Effectiveness and Work Factor Metrics 2015 – 2016 (Part 2)

SoS Newsletter- Advanced Book Block



Effectiveness and Work Factor Metrics

2015 – 2016 (Part 2)


Measurement to determine the effectiveness of security systems is an essential element of the Science of Security. The work cited here was presented in 2015 and 2016.

J. R. Ward and M. Younis, “A Cross-Layer Distributed Beamforming Approach to Increase Base Station Anonymity in Wireless Sensor Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417430
Abstract: In most applications of wireless sensor networks (WSNs), nodes act as data sources and forward measurements to a central base station (BS) that may also perform network management tasks. The critical role of the BS makes it a target for an adversary's attack. Even if a WSN employs conventional security primitives such as encryption and authentication, an adversary can apply traffic analysis techniques to find the BS. Therefore, the BS should be kept anonymous to protect its identity, role, and location. Previous work has demonstrated distributed beamforming to be an effective technique for boosting BS anonymity in WSNs; however, implementing distributed beamforming requires significant coordination messaging that increases transmission activity and alerts the adversary to the possibility of deceptive activities. In this paper we present a novel cross-layer design that integrates the control traffic of distributed beamforming with the MAC protocol in order to boost BS anonymity while keeping node transmissions at a normal rate. The advantages of our proposed approach include minimizing the overhead of anonymity measures and lowering the transmission power throughout the network, which leads to increased spectrum efficiency and reduced energy consumption. The simulation results confirm the effectiveness of our cross-layer design.
Keywords: access protocols; array signal processing; wireless sensor networks; MAC protocol; WSN; base station anonymity; central base station; cross-layer distributed beamforming approach; Array signal processing; Media Access Protocol; Schedules; Security; Synchronization; Wireless sensor networks (ID#: 16-10263)


A. Chahar, S. Yadav, I. Nigam, R. Singh and M. Vatsa, “A Leap Password Based Verification System,” Biometrics Theory, Applications and Systems (BTAS), 2015 IEEE 7th International Conference on, Arlington, VA, 2015, pp. 1-6. doi: 10.1109/BTAS.2015.7358745
Abstract: Recent developments in three-dimensional sensing devices have led to the proposal of a number of biometric modalities for non-critical scenarios. The Leap Motion device has received attention from the Vision and Biometrics communities due to its high-precision tracking. In this research, we propose Leap Password, a novel approach for biometric authentication. The Leap Password consists of a string of successive gestures performed by the user, during which physiological as well as behavioral information is captured. The Conditional Mutual Information Maximization algorithm selects the optimal feature set from the extracted information. Match-score fusion is performed to reconcile information from multiple classifiers. Experiments are performed on the Leap Password Dataset, which consists of over 1700 samples obtained from 150 subjects. An accuracy of over 81% is achieved, which shows the effectiveness of the proposed approach.
Keywords: biometrics (access control); feature selection; gesture recognition; image fusion; optimisation; security of data; 3D sensing devices; Leap Motion device; Leap Password based verification system; biometric authentication; conditional mutual information maximization algorithm; gestures; high precision tracking; match-score fusion; optimal feature set selection; Feature extraction; Performance evaluation; Physiology; Sensors; Spatial resolution; Three-dimensional displays; Time measurement (ID#: 16-10264)
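
The entry above reconciles multiple classifiers through match-score fusion. As an illustrative sketch only (the function names, the min-max normalization, and the weighted-sum rule are assumptions, not the authors' implementation), such a fusion step might look like:

```python
def min_max_normalize(scores):
    """Map raw classifier scores onto [0, 1] so scores from different
    classifiers become comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_match_scores(score_lists, weights=None):
    """Weighted-sum fusion: one list of match scores per classifier,
    one fused score per test sample."""
    if weights is None:
        weights = [1.0 / len(score_lists)] * len(score_lists)
    normalized = [min_max_normalize(s) for s in score_lists]
    n_samples = len(score_lists[0])
    return [sum(w * ns[i] for w, ns in zip(weights, normalized))
            for i in range(n_samples)]
```

Min-max normalization is one common choice before sum-rule fusion; the paper does not specify which normalization its fusion uses.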


J. Pang and Y. Zhang, “Event Prediction with Community Leaders,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 238-243. doi: 10.1109/ARES.2015.24
Abstract: With the emergence of online social network services, quantitative studies of social influence have become achievable. Leadership is one of the most intuitive and common forms of social influence, and understanding it could enable appealing applications such as targeted advertising and viral marketing. In this work, we focus on investigating leaders' influence on event prediction in social networks. We propose an algorithm, based on the events that users conduct, to discover leaders in social communities. Analysis of the leaders found in a real-life social network dataset leads to several interesting observations, such as that leaders do not have a significantly higher number of friends but are more active than other community members. We demonstrate the effectiveness of leaders' influence on users' behaviors through learning tasks: given that a leader has conducted an event, whether and when a user will perform that event. Experimental results show that with only a few leaders in a community, the event predictions are consistently effective.
Keywords: social networking (online); community leaders; event prediction; leadership; online social network services; real-life social network dataset; social influence; Entropy; Measurement; Prediction algorithms; Reliability; Social network services; Testing; Training (ID#: 16-10265)
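
The leader-discovery idea can be illustrated with a toy precedence count (an assumption-laden sketch, not the paper's algorithm): a user earns one point for every later adopter of an event that the user conducted earlier, so early adopters of widely followed events score highest.

```python
from collections import defaultdict

def leader_scores(event_log):
    """event_log: list of (user, event, time) tuples. For each event,
    each adopter scores one point per strictly later adopter of the
    same event; high scorers are candidate leaders."""
    by_event = defaultdict(list)
    for user, event, t in event_log:
        by_event[event].append((t, user))
    scores = defaultdict(int)
    for adoptions in by_event.values():
        adoptions.sort()  # chronological order within the event
        for i, (_, user) in enumerate(adoptions):
            scores[user] += len(adoptions) - i - 1
    return dict(scores)
```

A real system would normalize by activity level, since the paper notes leaders are more active than other members.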


H. Pazhoumand-Dar, M. Masek and C. P. Lam, “Unsupervised Monitoring of Electrical Devices for Detecting Deviations in Daily Routines,” 2015 10th International Conference on Information, Communications and Signal Processing (ICICS), Singapore, 2015, pp. 1-6. doi: 10.1109/ICICS.2015.7459849
Abstract: This paper presents a novel approach for automatic detection of abnormal behaviours in the daily routines of people living alone in their homes, without any manual labelling of the training dataset. The regularity and frequency of activities are monitored by estimating the status of specific electrical appliances via their power signatures, identified from the composite power signal of the house. A novel unsupervised clustering technique is presented to automatically profile the power signatures of electrical devices. A test statistic is then proposed to distinguish power signatures resulting from occupant interactions from those of self-regulated appliances such as refrigerators. Experiments on real-world data showed the effectiveness of the proposed approach in detecting the occupant's interactions with appliances as well as identifying days on which the occupant's behaviour fell outside the normal pattern.
Keywords: Monitoring; Power demand; Power measurement; Reactive power; Refrigerators; Training; abnormality detection; behaviour monitoring; power sensor; statistical measures (ID#: 16-10266)
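
One minimal flavor of profiling appliance power signatures without labels (a toy one-dimensional, gap-based clustering, not the authors' technique) is to group power-step magnitudes that fall within a tolerance of each other:

```python
def cluster_power_levels(deltas, tol=30.0):
    """Group absolute power-step magnitudes (in watts) extracted from a
    composite power signal: sorted values separated by a gap larger
    than `tol` start a new cluster. Each cluster's mean acts as a
    crude appliance power signature."""
    clusters = []
    for d in sorted(deltas):
        if clusters and d - clusters[-1][-1] <= tol:
            clusters[-1].append(d)
        else:
            clusters.append([d])
    return [sum(c) / len(c) for c in clusters]
```

Recurring steps near the same level (e.g., a kettle's heating element) would collapse into one signature, while distinct appliances separate into distinct clusters.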


N. Sae-Bae and N. Memon, “Quality of Online Signature Templates,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-8. doi: 10.1109/ISBA.2015.7126354
Abstract: This paper proposes a metric to measure the quality of an online signature template derived from a set of enrolled signature samples in terms of its distinctiveness against random signatures. Particularly, the proposed quality score is computed based on statistical analysis of histogram features that are used as part of an online signature representation. Experiments performed on three datasets consistently confirm the effectiveness of the proposed metric as an indication of false acceptance rate against random forgeries when the system is operated at a particular decision threshold. Finally, the use of the proposed quality metric to enforce a minimum signature strength policy in order to enhance security and reliability of the system against random forgeries is demonstrated.
Keywords: counterfeit goods; digital signatures; feature extraction; random processes; statistical analysis; decision threshold; false acceptance rate; histogram features; online signature representation; online signature template quality; quality metric; quality score; random forgeries; random signatures; signature strength policy; Biometrics (access control); Forgery; Histograms; Measurement; Sociology; Standards (ID#: 16-10267)
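
The template-quality idea can be sketched as a distinctiveness score: how far, in units of the population's spread, the enrolled template's histogram sits from histograms of random signatures. This is an illustrative statistic under assumed names, not the paper's exact formulation.

```python
import math

def manhattan(h1, h2):
    """L1 distance between two equal-length histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def template_quality(template_hist, random_hists):
    """Mean separation of the template from random-signature histograms,
    normalized by the standard deviation of those distances. Higher
    values suggest the template is more distinctive against random
    forgeries."""
    dists = [manhattan(template_hist, h) for h in random_hists]
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    std = math.sqrt(var)
    return mean / std if std > 0 else float("inf")
```

A minimum signature strength policy, as the abstract suggests, could then reject enrollments whose score falls below a threshold.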


M. Rezvani, A. Ignjatovic, E. Bertino and S. Jha, “A Collaborative Reputation System Based on Credibility Propagation in WSNs,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 1-8. doi: 10.1109/ICPADS.2015.9
Abstract: Trust and reputation systems are widely employed in WSNs to support decision-making processes by assessing the trustworthiness of sensor nodes in a data aggregation process. However, in unattended and hostile environments, more sophisticated malicious attacks, such as collusion attacks, can distort the computed trust scores, lead to low-quality or deceptive service, and undermine the aggregation results. In this paper we propose a novel, local, collaboration-based trust framework for WSNs built on a concept we introduce called credibility propagation. In our approach, the trustworthiness of a sensor node depends on the amount of credibility that the node receives from other nodes. In the process we also obtain estimates of the sensors' variances, which allows us to estimate the true value of the signal using maximum likelihood estimation. Extensive experiments using both real-world and synthetic datasets demonstrate the efficiency and effectiveness of our approach.
Keywords: decision making; maximum likelihood estimation; telecommunication security; wireless sensor networks; WSN; collaborative reputation system; collaborative-based trust framework; credibility propagation; data aggregation process; reputation systems; sensor nodes; trust systems; Aggregates; Collaboration; Computer science; Maximum likelihood estimation; Robustness; Temperature measurement; Wireless sensor networks; collusion attacks; data aggregation; iterative filtering; reputation system (ID#: 16-10268)
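
The interplay of credibility and variance estimation can be sketched as classic iterative filtering (a generic sketch in this spirit, not the paper's exact framework): alternately estimate the signal as a credibility-weighted mean, then re-estimate each sensor's variance against that signal, with weights set to inverse variances.

```python
def iterative_estimate(readings, iters=20, eps=1e-6):
    """readings: one list of measurements per sensor, aligned in time.
    Returns the estimated true signal and the final per-sensor weights.
    Sensors that deviate from the consensus accumulate large variance
    estimates and thus low credibility."""
    n_sensors = len(readings)
    n_times = len(readings[0])
    weights = [1.0] * n_sensors
    signal = [0.0] * n_times
    for _ in range(iters):
        total = sum(weights)
        signal = [sum(weights[s] * readings[s][t] for s in range(n_sensors)) / total
                  for t in range(n_times)]
        variances = [sum((readings[s][t] - signal[t]) ** 2
                         for t in range(n_times)) / n_times
                     for s in range(n_sensors)]
        weights = [1.0 / (v + eps) for v in variances]  # inverse-variance credibility
    return signal, weights
```

With two honest sensors near 10 and one faulty sensor stuck at 30, the estimate converges toward the honest cluster and the faulty sensor's weight collapses.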


X. Qu, S. Kim, D. Atnafu and H. J. Kim, “Weighted Sparse Representation Using a Learned Distance Metric for Face Recognition,” Image Processing (ICIP), 2015 IEEE International Conference on, Quebec City, QC, 2015, pp. 4594-4598. doi: 10.1109/ICIP.2015.7351677
Abstract: This paper presents a novel weighted sparse representation classification for face recognition with a learned distance metric (WSRC-LDM), which learns a Mahalanobis distance to calculate the weights and code the testing face. The Mahalanobis distance is learned using information-theoretic metric learning (ITML), which helps to define a better weight for use in WSRC. At the same time, the learned distance metric takes advantage of the classification rule of SRC, which helps the proposed method classify more accurately. Extensive experiments verify the effectiveness of the proposed method.
Keywords: face recognition; image representation; information theory; learning (artificial intelligence); ITML; Mahalanobis distance; WSRC-LDM; information-theoretic metric learning; learned distance metric; weighted sparse representation; Encoding; Face; Face recognition; Image reconstruction; Measurement; Testing; Training; Face Recognition; Metric Learning; Weighted Sparse Representation Classification (ID#: 16-10269)


B. Niu, S. Gao, F. Li, H. Li and Z. Lu, “Protection of Location Privacy in Continuous LBSs Against Adversaries with Background Information,” 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, 2016, pp. 1-6. doi: 10.1109/ICCNC.2016.7440649
Abstract: Privacy issues in continuous Location-Based Services (LBSs) have attracted considerable attention in the literature in recent years. In this paper, we illustrate the limitations of existing work and define an entropy-based privacy metric to quantify the degree of privacy based on a set of vital observations. To tackle these privacy issues, we propose an efficient privacy-preserving scheme, DUMMY-T, which aims to protect an LBS user's privacy against adversaries with background information. With our Dummy Locations Generating (DLG) algorithm, we first generate a set of realistic dummy locations for each snapshot, taking the minimum cloaking region and background information into account. Our proposed Dummy Paths Constructing (DPC) algorithm then guarantees location reachability by taking the maximum distance moved by mobile users into consideration. Security analysis and empirical evaluation results further verify the effectiveness and efficiency of DUMMY-T.
Keywords: data protection; entropy; mobile computing; mobility management (mobile radio); telecommunication security; DLG algorithm; DPC algorithm; DUMMY-T scheme; LBS user privacy protection; adversaries; background information; continuous LBS; continuous location-based services; dummy path-constructing algorithm; dummy-location generating algorithm; empirical evaluation; entropy-based privacy metric; location privacy protection; location reachability; maximum moving-mobile user distance; minimum cloaking region; privacy degree quantification; privacy-preserving scheme; security analysis; snapshots; Entropy; Measurement; Mobile communication; Privacy; Servers; System performance; Uncertainty (ID#: 16-10270)
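
An entropy-based privacy metric of the kind the abstract describes is standard: given the adversary's probability assignment over the candidate (real plus dummy) locations, the Shannon entropy measures the adversary's uncertainty. The sketch below shows that generic metric, not necessarily the paper's exact definition.

```python
import math

def privacy_entropy(probabilities):
    """Shannon entropy (in bits) of the adversary's belief over candidate
    locations; higher means more uncertainty about the real location.
    k indistinguishable dummies yield the maximum, log2(k)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)
```

If background information lets the adversary rule dummies out (their probabilities drop toward zero), the entropy, and hence the privacy degree, falls accordingly, which is the failure mode DUMMY-T guards against.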


F. Qin, Z. Zheng, C. Bai, Y. Qiao, Z. Zhang and C. Chen, “Cross-Project Aging Related Bug Prediction,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 43-48. doi: 10.1109/QRS.2015.17
Abstract: In a long-running system, software tends to encounter performance degradation and an increasing failure rate during execution, a phenomenon called software aging. The bugs contributing to software aging are defined as Aging Related Bugs (ARBs). Substantial manpower and economic costs could be saved if ARBs were found in the testing phase. However, due to their low presence probability and the difficulty of reproducing them, it is usually hard to predict ARBs within a project. In this paper, we study whether and how ARBs can be located through cross-project prediction. We propose a transfer learning based aging related bug prediction approach (TLAP), which takes advantage of transfer learning to reduce the distribution difference between training sets and testing sets while preserving their data variance. Furthermore, in order to mitigate the severe class imbalance, class imbalance learning is conducted on the transferred latent space. Finally, we employ machine learning methods to handle the bug prediction tasks. The effectiveness of our approach is validated and evaluated by experiments on two real software systems, indicating that after the processing of TLAP, the performance of ARB prediction can be dramatically improved.
Keywords: learning (artificial intelligence); program debugging; program testing; software maintenance; ARB bug prediction; TLAP; aging related bugs; class imbalance learning; cross-project aging; data variance; low presence probability; machine learning method; software aging; software execution; software failure rate; software performance degradation; software system; software testing; transfer learning based aging related bug prediction approach; Aging; Computer bugs; Learning systems; Measurement; Software; Testing; Training; aging related bug; bug prediction; cross-project; transfer learning (ID#: 16-10271)


X. Gong, X. Zhang and N. Wang, “Random-Attack Median Fortification Problems with Different Attack Probabilities,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 9127-9133. doi: 10.1109/ChiCC.2015.7261083
Abstract: Critical infrastructure can be lost to random and intentional attacks. The random-attack median fortification problem has been presented to minimize the expected operation cost after purely random attacks with the same attack probability for every facility. This paper discusses the protection problem for supply systems under random attacks with tendentiousness, that is, where some facilities are more attractive to attackers. The random-attack median fortification problem with different attack probabilities (RAMF-DP) is proposed and solved by calculating the service probabilities for all demand-node and facility pairs after an attack. The effectiveness of solving RAMF-DP is verified through experiments with various attack probabilities.
Keywords: cost reduction; critical infrastructures; disasters; dynamic programming; national security; probability; random processes; terrorism; RAMF-DP; critical infrastructure; demand nodes; expected operation cost minimization; facility attack probability; intentional attack; protection problem; pure random attack; random-attack median fortification problem; service probability; supply system; tendentiousness; Computational modeling; Games; Indexes; Linear programming; Mathematical model; Q measurement; Terrorism; Different attack probabilities; Median problems; Random attacks; Tendentiousness (ID#: 16-10272)
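
The expected-cost computation with heterogeneous attack probabilities can be sketched as follows (an illustrative model under assumed names, not the paper's formulation): each demand node is served by its nearest surviving facility, facilities are destroyed independently, and a penalty applies if all of a node's facilities are lost.

```python
def expected_service_cost(demands, attack_prob, penalty):
    """demands: node -> list of (facility, distance) pairs.
    attack_prob: facility -> independent probability it is destroyed.
    A node is served by its nearest surviving facility; if every
    candidate facility is destroyed, the node pays `penalty`."""
    total = 0.0
    for node, options in demands.items():
        p_all_down = 1.0  # probability that all nearer facilities are destroyed
        expected = 0.0
        for fac, d in sorted(options, key=lambda x: x[1]):
            expected += p_all_down * (1.0 - attack_prob[fac]) * d
            p_all_down *= attack_prob[fac]
        expected += p_all_down * penalty
        total += expected
    return total
```

Fortifying a facility would then amount to setting its attack probability to zero and recomputing the expected cost, which is the objective a fortification search would minimize.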


R. F. Lima and A. C. M. Pereira, “A Fraud Detection Model Based on Feature Selection and Undersampling Applied to Web Payment Systems,” 2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Singapore, 2015, pp. 219-222. doi: 10.1109/WI-IAT.2015.13
Abstract: The volume of electronic transactions has grown considerably in recent years, mainly due to the popularization of e-commerce. Alongside this popularization, we have observed a significant increase in the number of fraud cases, resulting in billions of dollars in losses each year worldwide. Therefore, it is important to develop and apply techniques that can assist in fraud detection in Web transactions. Due to the large amount of data generated in electronic transactions, finding the best set of features is an essential task for identifying frauds. Fraud detection is a specific application of anomaly detection, characterized by a large imbalance between the classes (e.g., fraud or non-fraud), which can be a detrimental factor for feature selection techniques. In this work we evaluate the behavior and impact of feature selection techniques for detecting fraud in a Web transaction scenario, applying feature selection techniques and performing undersampling in this step. To measure the effectiveness of the feature selection approach we use state-of-the-art classification techniques to identify frauds, using real data from one of the largest electronic payment systems in Latin America. The fraud detection models thus comprise feature selection and classification techniques. To evaluate our results we use the F-measure and Economic Efficiency metrics. Our results show that the imbalance between the classes reduces the effectiveness of feature selection and that the undersampling strategy applied in this step improves the final results. We achieve very good performance in fraud detection using the proposed methodology, reducing the number of features and presenting financial gains of up to 61% compared to the company's actual scenario.
Keywords: 1/f noise; Internet; electronic commerce; security of data; F-measure; Latin America; Web payment system; Web transactions; e-commerce; economic efficiency; electronic payment system; electronic transactions; feature selection; fraud detection model; undersampling; Computational modeling; Economics; Feature extraction; Frequency modulation; Logistics; Measurement; Yttrium; Anomaly Detection; Electronic Transactions; Feature Selection; Fraud Detection (ID#: 16-10273)
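
The undersampling step the abstract relies on is standard: randomly drop majority-class instances until the classes are balanced before running feature selection. A minimal sketch (not the paper's pipeline; names are assumptions):

```python
import random

def undersample(instances, labels, seed=0):
    """Randomly keep only as many instances of each class as the rarest
    class has, so a fraud/non-fraud imbalance no longer dominates the
    feature selection step."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(instances, labels):
        by_class.setdefault(y, []).append(x)
    n_min = min(len(items) for items in by_class.values())
    xs, ys = [], []
    for y, items in by_class.items():
        for x in rng.sample(items, n_min):  # sample without replacement
            xs.append(x)
            ys.append(y)
    return xs, ys
```

With 8 legitimate and 2 fraudulent transactions, the balanced sample keeps 2 of each, which is exactly the condition under which the paper reports feature selection regaining effectiveness.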


I. Kiss, B. Genge and P. Haller, “Behavior-Based Critical Cyber Asset Identification in Process Control Systems Under Cyber Attacks,” Carpathian Control Conference (ICCC), 2015 16th International, Szilvasvarad, 2015, pp. 196-201. doi: 10.1109/CarpathianCC.2015.7145073
Abstract: The accelerated advancement of Process Control Systems (PCS) has transformed the traditional, completely isolated systems view into a networked, inter-connected “system of systems” perspective, where off-the-shelf Information and Communication Technologies (ICT) are deeply embedded into the heart of PCS. This has brought significant economic and operational benefits, but it has also provided new opportunities for malicious actors targeting critical PCS. To address these challenges, in this work we employ our previously developed Cyber Attack Impact Assessment (CAIA) technique to provide a systematic mechanism to help PCS designers and industry operators assess the impact severity of various cyber threats. Moreover, the question of why one device is more critical than others, which also motivates this work, is answered through extensive numerical results showing the significance of system dynamics in the context of closed-loop PCS. The CAIA approach is validated against the simulated Tennessee Eastman chemical process, including 41 observed variables and 12 control variables involved in cascade controller structures. The results show the applicability and effectiveness of CAIA for various attack scenarios.
Keywords: closed loop systems; control engineering computing; interconnected systems; process control; production engineering computing; security of data; CAIA technique; Tennessee Eastman chemical process; behavior-based critical cyber asset identification; cascade controller structures; closed-loop PCS; critical PCS; cyber attack impact assessment; cyber threats impact severity; economical benefits; malicious actors; networked interconnected system of systems; operational benefits; process control systems; systematic mechanism; systems dynamics; Chemical processes; Feeds; Hardware; Inductors; Mathematical model; Process control; Time measurement; Control Variable; Cyber Attack; Impact Assessment; Observed Variable; Process Control Systems; System Dynamics (ID#: 16-10274)


L. Behe, Z. Wheeler, C. Nelson, B. Knopp and W. M. Pottenger, “To Be or Not to Be IID: Can Zipf's Law Help?,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225274
Abstract: Classification is a popular problem within machine learning, and increasing the effectiveness of classification algorithms has many significant applications within industry and academia. In particular, focus is given to Higher-Order Naive Bayes (HONB), a relational variant of the famed Naive Bayes (NB) statistical classification algorithm that has been shown to outperform Naive Bayes in many cases [1,10]. Specifically, HONB has outperformed NB on character n-gram based feature spaces when the available training data is small [2]. In this paper, a correlation is hypothesized between the performance of HONB on character n-gram feature spaces and how closely the feature space distribution follows Zipf's Law. This hypothesis stems from the overarching goal of ultimately understanding HONB and knowing when it will outperform NB. Textual datasets ranging from several thousand instances to nearly 20,000 instances, some containing microtext, were used to generate character n-gram feature spaces. HONB and NB were both used to model these datasets, using varying character n-gram sizes (2-7) and dictionary sizes of up to 5000 features. The performances of HONB and NB were then compared, and the results support the hypothesized correlation for the Accuracy and Precision metrics. Additionally, a solution is provided for an open problem presented in [1], giving an explicit formula for the number of SDRs from k given sets; this has connections to counting higher-order paths of arbitrary length, which are important in the learning stage of HONB.
Keywords: Bayes methods; learning (artificial intelligence); natural language processing; pattern classification; text analysis; HONB; IID; Zipf's law; accuracy metrics; character n-gram based feature spaces; character n-gram feature spaces; classification algorithms; feature space distribution; higher-order naive Bayes; independent and identically distributed; machine learning; naive Bayes statistical classification algorithm; precision metrics; textual datasets; Accuracy; Classification algorithms; Correlation; Earthquakes; Measurement; Niobium; Prediction algorithms (ID#: 16-10275)
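
Testing how closely a feature-space distribution follows Zipf's Law is commonly done by fitting log frequency against log rank: a Zipfian distribution gives a straight line with slope near -1. The sketch below shows that generic least-squares check (illustrative, not the paper's measurement procedure; requires at least two frequencies).

```python
import math

def zipf_fit(frequencies):
    """Least-squares fit of log(frequency) vs. log(rank) over the
    frequencies sorted in descending order. Returns (slope, r_squared);
    a Zipfian sample yields slope near -1 and r_squared near 1."""
    freqs = sorted(frequencies, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    r2 = sxy * sxy / (sxx * syy) if sxx and syy else 1.0
    return slope, r2
```

Character n-gram counts from each dataset could be fed in directly, and the resulting (slope, r²) pair compared against the observed HONB-vs-NB performance gap.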


Q. Yang, Rui Min, D. An, W. Yu and X. Yang, “Towards Optimal PMU Placement Against Data Integrity Attacks in Smart Grid,” 2016 Annual Conference on Information Science and Systems (CISS), Princeton, NJ, USA, 2016, pp. 54-58. doi: 10.1109/CISS.2016.7460476
Abstract: State estimation plays a critical role in the self-detection and control of the smart grid. Data integrity attacks (also known as false data injection attacks) have shown significant potential for undermining the state estimation of power systems, and corresponding countermeasures have drawn increased scholarly interest. In this paper, we consider the existing least-effort attack model, which computes the minimum number of sensors that must be compromised in order to manipulate a given number of states, and develop an effective greedy-based algorithm for optimal PMU placement that not only combats data integrity attacks but also ensures system observability with low overhead. Experimental data obtained from IEEE standard systems demonstrates the effectiveness of the proposed defense scheme against data integrity attacks.
Keywords: Observability; Phasor measurement units; Power grids; Security; Sensors; State estimation; Data integrity attacks; defense strategy; optimal PMU placement; state estimation; system observability (ID#: 16-10276)
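
The observability side of greedy PMU placement can be sketched with the usual coverage rule: a PMU at a bus observes that bus and its neighbors, and the greedy step always places the next PMU where it observes the most still-unobserved buses. This is a sketch of the generic greedy strategy only, not the paper's attack-aware formulation.

```python
def greedy_pmu_placement(adjacency):
    """adjacency: bus -> set of neighboring buses. Greedily place PMUs,
    each observing its own bus plus all neighbors, until every bus is
    observed. Returns the placement order."""
    unobserved = set(adjacency)
    placement = []
    while unobserved:
        # Pick the bus whose PMU would newly observe the most buses.
        best = max(adjacency,
                   key=lambda b: len(({b} | adjacency[b]) & unobserved))
        placement.append(best)
        unobserved -= {best} | adjacency[best]
    return placement
```

On a star network the greedy rule immediately picks the hub, covering every bus with a single PMU; the paper's defense additionally biases such choices toward sensors the least-effort attack would target.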


Y. Zhauniarovich, A. Philippov, O. Gadyatskaya, B. Crispo and F. Massacci, “Towards Black Box Testing of Android Apps,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 501-510. doi: 10.1109/ARES.2015.70
Abstract: Many state-of-the-art mobile application testing frameworks (e.g., Dynodroid [1], EvoDroid [2]) rely on Emma [3] or other code coverage libraries to measure the coverage achieved. The underlying assumption for these frameworks is the availability of the app source code. Yet, application markets and security researchers face the need to test third-party mobile applications in the absence of the source code. A number of frameworks exist for both manual and automated test generation that address this challenge. However, these frameworks often do not provide any statistics on the code coverage achieved, or provide only coarse-grained ones such as the number of activities or methods covered. At the same time, given two test reports generated by different frameworks, there is no way to tell which one achieved better coverage if the reported metrics differ (or no coverage results were provided). To address these issues we designed a framework called BBOXTESTER that is able to generate code coverage reports and produce uniform coverage metrics in testing without the source code. Security researchers can automatically execute applications exploiting current state-of-the-art tools, and use the results of our framework to assess whether the security-critical code was covered by the tests. In this paper we report on the design and implementation of BBOXTESTER and assess its efficiency and effectiveness.
Keywords: Android (operating system); mobile computing; program testing; security of data; Android apps; BBOXTESTER; automated test generation; black box testing; code coverage report generation; coverage metrics; manual test generation; security-critical code; third-party mobile application testing; Androids; Humanoid robots; Instruments; Java; Measurement; Runtime; Testing (ID#: 16-10277)


J. Armin, B. Thompson, D. Ariu, G. Giacinto, F. Roli and P. Kijewski, “2020 Cybercrime Economic Costs: No Measure No Solution,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 701-710. doi: 10.1109/ARES.2015.56
Abstract: Governments need reliable data on crime in order both to devise adequate policies and to allocate the correct revenues, so that the measures are cost-effective, i.e., the money spent on prevention, detection, and handling of security incidents is balanced by a decrease in losses from offences. Analysis of the actual scenario of government action in cyber security shows that the availability of multiple contrasting figures on the impact of cyber-attacks is holding back the adoption of policies for cyber space, as their cost-effectiveness cannot be clearly assessed. The most relevant literature on the topic is reviewed to highlight the research gaps and to determine the related future research issues that need addressing to provide a solid ground for future legislative and regulatory actions at the national and international levels.
Keywords: government data processing; security of data; cyber security; cyber space; cyber-attacks; cybercrime economic cost; economic costs; Computer crime; Economics; Measurement; Organizations; Reliability; Stakeholders (ID#: 16-10278)


P. Pantazopoulos and I. Stavrakakis, “Low-Cost Enhancement of the Intra-Domain Internet Robustness Against Intelligent Node Attacks,” Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, Kansas City, MO, 2015, pp. 219-226. doi: 10.1109/DRCN.2015.7149016
Abstract: Internet vulnerability studies typically consider highly central nodes as favorable targets of intelligent (malicious) attacks. Heuristics that add redundancy in the form of k extra links in the topology are a common class of countermeasures seeking to enhance Internet robustness. To identify the nodes to be linked, most previous works propose very simple centrality criteria that lack a clear rationale and only occasionally address intra-domain topologies. More importantly, the implementation cost induced by adding lengthy links between nodes in remote network locations is rarely taken into account. In this paper, we explore cost-effective link additions in the locality of the targets, with the k extra links added only between their first neighbors. We introduce an innovative link-utility metric that identifies which pair of a target's neighbors aggregates the most shortest paths from the rest of the nodes and could therefore enhance network connectivity if linked. This metric drives the proposed heuristic, which solves the problem of assigning the link budget k to the neighbors of the targets. Employing a rich intra-domain network dataset, we first conduct a proof-of-concept study to validate the effectiveness of the metric. We then compare our approach with the most effective heuristic to date, which does not bound the length of the added links. Our results suggest that the proposed enhancement can closely approximate the connectivity levels of that heuristic, yet with up to eight times lower implementation cost.
Keywords: Internet; computer network security; telecommunication links; telecommunication network topology; innovative link utility metric; intelligent node attack; intradomain internet robustness low-cost enhancement; intradomain topology; network connectivity enhancement; proof-of-concept study; Communication networks;  Measurement; Network topology; Reliability engineering; Robustness; Topology (ID#: 16-10279)


M. Bradbury, M. Leeke and A. Jhumka, “A Dynamic Fake Source Algorithm for Source Location Privacy in Wireless Sensor Networks,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 531-538. doi: 10.1109/Trustcom.2015.416
Abstract: Wireless sensor networks (WSNs) are commonly used in asset monitoring applications, where it is often desirable for the location of the asset being monitored to be kept private. The source location privacy (SLP) problem involves protecting the location of a WSN source node from an attacker who is attempting to locate it. Among the most promising approaches to the SLP problem is the use of fake sources, with much existing research demonstrating their efficacy. Despite the effectiveness of the approach, the most effective algorithms providing SLP require network and situational knowledge that makes their deployment impractical in many contexts. In this paper, we develop a novel dynamic fake sources-based algorithm for SLP. We show that the algorithm provides state-of-the-art levels of location privacy under practical operational assumptions.
Keywords: data privacy; telecommunication security; wireless sensor networks; SLP problem; WSN source node; asset monitoring applications; dynamic fake source algorithm; location protection; source location privacy problem; wireless sensor networks; Context; Heuristic algorithms; Monitoring; Position measurement; Privacy; Temperature sensors; Wireless sensor networks; Dynamic; Sensor Networks; Source Location Privacy (ID#: 16-10280)


J. R. Ward and M. Younis, “Base Station Anonymity Distributed Self-Assessment in Wireless Sensor Networks,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 103-108. doi: 10.1109/ISI.2015.7165947
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, the sensors act as data sources and forward information generated by event triggers to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN with the least amount of effort. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to identify the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Previous work has proposed anonymity-boosting techniques to improve the BS's anonymity posture, but all require some amount of overhead such as increased energy consumption, increased latency, or decreased throughput. If the BS understood its own anonymity posture, then it could evaluate whether the benefits of employing an anti-traffic analysis technique are worth the associated overhead. In this paper we propose two distributed approaches to allow a BS to assess its own anonymity and correspondingly employ anonymity-boosting techniques only when needed. Our approaches allow a WSN to increase its anonymity on demand, based on real-time measurements, and therefore conserve resources. The simulation results confirm the effectiveness of our approaches.
Keywords: security of data; wireless sensor networks; WSN; anonymity-boosting techniques; anti-traffic analysis technique; base station; base station anonymity distributed self-assessment; conventional security mechanisms; improved BS anonymity; Current measurement; Energy consumption; Entropy; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy
(ID#: 16-10281)


L. Ren, C. Gong, Q. Shen and H. Wang, “A Method for Health Monitoring of Power MOSFETs Based on Threshold Voltage,” Industrial Electronics and Applications (ICIEA), 2015 IEEE 10th Conference on, Auckland, 2015, pp. 1729-1734. doi: 10.1109/ICIEA.2015.7334390
Abstract: The prognostics and health management (PHM) of airborne equipment plays an important role in ensuring the security of flight and improving the combat-readiness ratio. The wide use of electronic equipment in aircraft is making PHM technology for power electronics devices more important, as main-circuit devices have proven to have a high failure rate in power equipment. This paper investigates fault feature extraction for the power metal oxide semiconductor field effect transistor (MOSFET). First, the failure mechanisms and failure features of active power switches are analyzed, and the junction temperature is shown to be an overall parameter for the health monitoring of MOSFETs. Then, a health monitoring method based on the threshold voltage is proposed. For the buck converter, a measuring method for the threshold voltage is proposed that is simple to implement and highly precise. Finally, simulation and experimental results verify the effectiveness of the proposed measuring method.
Keywords: monitoring; power MOSFET; power electronics; active power switches; airborne equipment; buck converter; electronics equipment; failure mechanism; fault feature extraction; health monitoring; junction temperature; power electronics devices; prognostics and health management; threshold voltage; Aging; Degradation; Junctions; MOSFET; Temperature; Temperature measurement; Threshold voltage; Buck converter; The prognostics and health management (PHM); the failure mechanism (ID#: 16-10282)


T. Saito, H. Miyazaki, T. Baba, Y. Sumida and Y. Hori, “Study on Diffusion of Protection/Mitigation Against Memory Corruption Attack in Linux Distributions,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 525-530. doi: 10.1109/IMIS.2015.73
Abstract: Memory corruption attacks that exploit software vulnerabilities have become a serious problem on the Internet. Effective protection and/or mitigation technologies aimed at countering these attacks are currently provided with operating systems, compilers, and libraries. Unfortunately, the attacks continue. One reason for this state of affairs is the uneven diffusion of the latest (and thus most potent) protection and/or mitigation technologies: attackers are likely to have found ways of circumventing the well-known older versions, causing those to lose effectiveness. Therefore, in this paper, we explore the diffusion of relatively new technologies and analyze the results of a survey of Linux distributions.
Keywords: Linux; security of data; Internet; Linux distributions; memory corruption attack mitigation; memory corruption attack protection; software vulnerabilities; Buffer overflows; Geophysical measurement techniques; Ground penetrating radar; Kernel; Libraries;  Anti-thread; Buffer Overflow; Diffusion of countermeasure techniques; Memory corruption attacks (ID#: 16-10283)


X. Zhou, H. Wang and J. Zhao, “A Fault-Localization Approach Based on the Coincidental Correctness Probability,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 292-297. doi: 10.1109/QRS.2015.48
Abstract: Coverage-based fault localization is a spectrum-based technique that identifies the executing program elements that correlate with failure. However, its effectiveness suffers from the effect of coincidental correctness, which occurs when a fault is executed but no failure is detected. Coincidental correctness is prevalent and has been proven to be a safety-reducing factor for coverage-based fault-location techniques. In this paper, we propose a new fault-localization approach based on the coincidental correctness probability. We estimate the probability that coincidental correctness happens for each program execution using dynamic data-flow analysis and control-flow analysis. To evaluate our approach, we use safety and precision as evaluation metrics. Our experiment involved 62 seeded versions of C programs from SIR. We discuss the comparison results with Tarantula and two improved CBFL techniques that cleanse test suites of coincidental correctness. The results show that our approach can improve the safety and precision of the fault-localization technique to a certain degree.
Keywords: data flow analysis; probability; program testing; software fault tolerance; C programs; CBFL techniques; Tarantula; coincidental correctness probability; control-flow analysis; coverage-based fault localization; coverage-based fault location techniques; dynamic data-flow analysis; evaluation metrics; failure; fault-localization approach; precision; probability estimation; program elements; program execution; safety reducing factor; software testing; spectrum-based technique; test suites; Algorithm design and analysis; Circuit faults; Estimation; Heuristic algorithms; Lead; Measurement; Safety; coincidental correctness; fault localization (ID#: 16-10284)
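The Tarantula baseline mentioned above scores each program element by the relative fraction of failing versus passing tests that execute it; elements executed mostly by failing tests rank as most suspicious. A minimal sketch (the coverage data below is a toy example, not the paper's SIR subjects):

```python
def tarantula(coverage, verdicts):
    """Rank statements by Tarantula suspiciousness.

    coverage[t] is the set of statement ids executed by test t;
    verdicts[t] is True if test t passed, False if it failed.
    """
    total_pass = sum(verdicts)
    total_fail = len(verdicts) - total_pass
    stmts = set().union(*coverage)
    scores = {}
    for s in stmts:
        passed = sum(1 for cov, ok in zip(coverage, verdicts) if ok and s in cov)
        failed = sum(1 for cov, ok in zip(coverage, verdicts) if not ok and s in cov)
        fail_ratio = failed / total_fail if total_fail else 0.0
        pass_ratio = passed / total_pass if total_pass else 0.0
        denom = fail_ratio + pass_ratio
        scores[s] = fail_ratio / denom if denom else 0.0
    return scores

# Toy example: statement 3 is covered only by the failing test.
coverage = [{1, 2, 3}, {1, 2}, {1}]
verdicts = [False, True, True]   # test 0 fails
scores = tarantula(coverage, verdicts)   # statement 3 scores highest (1.0)
```

A coincidentally correct test, one that executes the faulty statement yet passes, inflates `pass_ratio` for that statement and lowers its score, which is exactly the degradation the paper's probability estimate is designed to counteract.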


P. Xu, Q. Miao, T. Liu and X. Chen, “Multi-Direction Edge Detection Operator,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 187-190. doi: 10.1109/CIS.2015.53
Abstract: Because of noise in images, the edges extracted from noisy images by traditional operators are often discontinuous and inaccurate. To solve these problems, this paper proposes a multi-direction edge detection operator for noisy images. The new operator is designed by introducing the shear transformation into the traditional operator. On the one hand, the shear transformation provides a more favorable treatment of directions, enabling the new operator to detect edges in different directions and overcoming the directional limitation of the traditional operator. On the other hand, the single-pixel edge images in different directions can be fused, so that the edge information complements each other. The experimental results indicate that the new operator is superior to traditional ones in terms of both the effectiveness of edge detection and the ability to reject noise.
Keywords: edge detection; image denoising; mathematical operators; transforms; edge extraction; multidirection edge detection operator; noise rejection ability; noisy images; shear transformation; single-pixel edge images; Computed tomography; Convolution; Image edge detection; Noise measurement; Sensitivity; Standards; Wavelet transforms; false edges; matched edges; the shear transformation (ID#: 16-10285)
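For context, the "traditional operator" the abstract contrasts against is typically a fixed-direction convolution such as Sobel, which responds only to horizontal and vertical gradients. A minimal pure-Python sketch of that baseline (the shear-transformed directional variants themselves are not reproduced here):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def conv3(img, kernel):
    """Valid-mode 3x3 convolution over a 2-D list-of-lists image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            out[y][x] = sum(img[y + i][x + j] * kernel[i][j]
                            for i in range(3) for j in range(3))
    return out

def gradient_magnitude(img):
    gx, gy = conv3(img, SOBEL_X), conv3(img, SOBEL_Y)
    return [[(gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5
             for x in range(len(gx[0]))] for y in range(len(gx))]

# 6x6 image: dark left half, bright right half -> a vertical edge in the middle.
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
mag = gradient_magnitude(img)   # strong response only at the two center columns
```

The shear transformation in the paper effectively rotates such kernels to intermediate angles, so edges that fall between the horizontal and vertical responses are no longer missed.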


W. Li, B. Niu, H. Li and F. Li, “Privacy-Preserving Strategies in Service Quality Aware Location-Based Services,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7328-7334. doi: 10.1109/ICC.2015.7249497
Abstract: The popularity of Location-Based Services (LBSs) has resulted in serious privacy concerns recently. Mobile users may lose their privacy while enjoying various social activities due to untrusted LBS servers. Many Privacy Protection Mechanisms (PPMs) have been proposed in the literature, employing different strategies that come at the cost of system overhead, service quality, or both. In this paper, we design privacy-preserving strategies for both users and adversaries in service quality aware LBSs. Different from existing approaches, we first define Fine-Grained Side Information (FGSI) and point out its importance over the existing concept of side information, and we propose a Dual-Privacy Metric (DPM) and a Service Quality Metric (SQM). Then, we build analytical frameworks that provide privacy-preserving strategies for mobile users and for adversaries to achieve their respective goals. Finally, the evaluation results show the effectiveness of the proposed frameworks and strategies.
Keywords: data protection; mobility management (mobile radio); quality of service; DPM; FGSI; LBS; PPM; SQM; dual-privacy metric; fine-grained side information; mobile user; privacy protection mechanism; privacy-preserving strategy; service quality aware location-based service; service quality metric; Information systems; Measurement; Mobile radio mobility management; Privacy; Security; Servers (ID#: 16-10286)


S. Debroy, P. Calyam, M. Nguyen, A. Stage and V. Georgiev, “Frequency-Minimal Moving Target Defense Using Software-Defined Networking,” 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, 2016, pp. 1-6. doi: 10.1109/ICCNC.2016.7440635
Abstract: With the increase of cyber attacks such as DoS, there is a need for intelligent counter-strategies to protect critical cloud-hosted applications. The challenge for the defense is to minimize the waste of cloud resources and limit loss of availability, yet have effective proactive and reactive measures that can thwart attackers. In this paper we address these defense needs by leveraging moving target defense protection within a Software-Defined Networking-enabled cloud infrastructure. Our novelty is in the frequency minimization and consequent location selection of target movement across heterogeneous virtual machines based on attack probability, which in turn minimizes cloud management overheads. We evaluate the effectiveness of our scheme using a large-scale GENI testbed with a just-in-time news feed application setup. Our results show a low attack success rate and higher performance of the target application in comparison to existing static moving target defense schemes that assume homogeneous virtual machines.
Keywords: cloud computing; computer network security; software defined networking; DoS; critical cloud-hosted applications; cyber attacks; frequency-minimal moving target defense; heterogeneous virtual machines; intelligent counter-strategies; software-defined networking-enabled cloud infrastructure; Bandwidth; Cloud computing; Computer crime; Feeds; History; Loss measurement; Time-frequency analysis (ID#: 16-10287)


Y. Nakahira and Y. Mo, “Dynamic State Estimation in the Presence of Compromised Sensory Data,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 5808-5813. doi: 10.1109/CDC.2015.7403132
Abstract: In this article, we consider the state estimation problem of a linear time invariant system in adversarial environment. We assume that the process noise and measurement noise of the system are l∞ functions. The adversary compromises at most γ sensors, the set of which is unknown to the estimation algorithm, and can change their measurements arbitrarily. We first prove that if after removing a set of 2γ sensors, the system is undetectable, then there exists a destabilizing noise process and attacker's input to render the estimation error unbounded. For the case that the system remains detectable after removing an arbitrary set of 2γ sensors, we construct a resilient estimator and provide an upper bound on the l∞ norm of the estimation error. Finally, a numerical example is provided to illustrate the effectiveness of the proposed estimator design.
Keywords: invariance; linear systems; measurement errors; measurement uncertainty; state estimation; compromised sensory data; dynamic state estimation; estimation error; estimator design; l∞ functions; linear time invariant system; measurement noise; measurements arbitrarily; process noise; Estimation error; Robustness; Security; Sensor systems; State estimation (ID#: 16-10288)
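The redundancy condition above (the system must remain detectable after removing 2γ sensors) has a simple scalar analogue: with at least 2γ+1 sensors measuring the same quantity, the median is unaffected by γ arbitrarily corrupted readings. An illustrative sketch of that intuition, not the authors' estimator:

```python
def resilient_estimate(readings):
    """Median of redundant sensor readings: robust to gamma compromised
    sensors whenever len(readings) >= 2 * gamma + 1."""
    vals = sorted(readings)
    n = len(vals)
    mid = n // 2
    return vals[mid] if n % 2 else 0.5 * (vals[mid - 1] + vals[mid])

# True state is 5.0; 5 sensors, gamma = 2 compromised with huge injected values.
readings = [5.1, 4.9, 5.0, 1e6, -1e6]
estimate = resilient_estimate(readings)   # 5.0 despite two arbitrary injections
```

No matter what values the attacker injects into two of the five channels, the sorted middle element is still produced by an honest sensor, mirroring the paper's bound on the estimation error when enough redundancy survives.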


M. Kargar, A. An, N. Cercone, P. Godfrey, J. Szlichta and X. Yu, “Meaningful Keyword Search in Relational Databases with Large and Complex Schema,” 2015 IEEE 31st International Conference on Data Engineering, Seoul, 2015, pp. 411-422. doi: 10.1109/ICDE.2015.7113302
Abstract: Keyword search over relational databases offers an alternative to SQL for querying and exploring databases that is effective for lay users who may not be well versed in SQL or the database schema. This becomes more pertinent for databases with large and complex schemas. An answer in this context is a join tree spanning the tuples containing the query's keywords. As there are potentially many answers to the query, and the user is often only interested in seeing the top-k answers, how to rank the answers based on their relevance is of paramount importance. We focus on the relevance of joins as the fundamental means to rank answers. We devise means to measure the relevance of relations and foreign keys in the schema over the information content of the database. This can be done offline with no need for external models. We compare the proposed measures against a gold standard derived from a real workload over TPC-E and evaluate the effectiveness of our methods. Finally, we test the performance of our measures against existing techniques to demonstrate a marked improvement, and perform a user study to establish the naturalness of the ranking of the answers.
Keywords: SQL; query processing; relational databases; trees (mathematics); SQL; TPC-E; answer ranking; complex schema; database querying; foreign keys; join tree spanning tuples; keyword search; large schema; query answering; relation relevance measurement; relational databases; Companies; Gold; Indexes; Keyword search; Relational databases; Security (ID#: 16-10289)


H. B. M. Shashikala, R. George and K. A. Shujaee, “Outlier Detection in Network Data Using the Betweenness Centrality,” SoutheastCon 2015, Fort Lauderdale, FL, 2015, pp. 1-5. doi: 10.1109/SECON.2015.7133008
Abstract: Outlier detection has been used to detect and, where appropriate, remove anomalous observations from data. It has important applications in fraud detection, network robustness analysis, and intrusion detection. In this paper, we propose Betweenness Centrality (BEC) as a novel means of determining outliers in network analyses. The betweenness centrality of a vertex in a graph measures the vertex's participation in the shortest paths of the graph and is widely used in network analyses. In social networks especially, the betweenness centralities of vertices are computed recursively for community detection and for finding influential users. In this paper, we show that this method is also efficient for finding outliers in social network analyses, and we demonstrate the effectiveness of the new method using experimental data.
Keywords: fraud; graph theory; recursive estimation; security of data; social networking (online); BEC; betweenness centrality; community detection; fraud detection; graph analysis; intrusion detection; network data; network robustness analysis; outlier detection; recursive computation; social network analyses; vertices; Atmospheric measurements; Chaos; Particle measurements; Presses; adjacency matrix (ID#: 16-10290)
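For readers unfamiliar with the metric, betweenness centrality for an unweighted, undirected graph can be computed with Brandes' algorithm; a vertex whose score stands far from the rest can then be flagged as an outlier. A self-contained sketch (not the authors' code):

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for an undirected, unweighted graph
    given as {node: [neighbors]}."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:                                   # BFS from s
            v = queue.popleft(); stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                                   # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2.0 for v, c in bc.items()}         # each pair counted twice

# Path a-b-c: every shortest path between a and c passes through b.
g = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
scores = betweenness(g)   # {'a': 0.0, 'b': 1.0, 'c': 0.0}
```

In an outlier-detection setting one would compute these scores for the whole network and flag vertices whose score deviates strongly from the distribution, e.g. `max(scores, key=scores.get)` in this toy graph.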


E. Lagunas, M. G. Amin and F. Ahmad, “Through-the-Wall Radar Imaging for Heterogeneous Walls Using Compressive Sensing,” Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), 2015 3rd International Workshop on, Pisa, 2015, pp. 94-98. doi: 10.1109/CoSeRa.2015.7330271
Abstract: Front wall reflections are considered one of the main challenges in sensing through walls using radar. This is especially true under sparse time-space or frequency-space sampling of radar returns which may be required for fast and efficient data acquisition. Unlike homogeneous walls, heterogeneous walls have frequency and space varying characteristics which violate the smooth surface assumption and cause significant residuals under commonly used wall clutter mitigation techniques. In the proposed approach, the phase shift and the amplitude of the wall reflections are estimated from the compressive measurements using a Maximum Likelihood Estimation (MLE) procedure. The estimated parameters are used to model electromagnetic (EM) wall returns, which are subsequently subtracted from the total radar returns, rendering wall-reduced and wall-free signals. Simulation results are provided, demonstrating the effectiveness of the proposed technique and showing its superiority over existing methods.
Keywords: compressed sensing; data acquisition; image sampling; maximum likelihood estimation; radar clutter; radar imaging; EM wall return; MLE procedure; compressive sensing; electromagnetic wall return; frequency-space sampling; front wall reflection; heterogeneous wall; maximum likelihood estimation procedure; sparse time-space sampling; through-the-wall radar imaging; wall clutter mitigation technique; Antenna measurements; Arrays; Maximum likelihood estimation; Radar antennas; Radar imaging (ID#: 16-10291)


Rong Jin and Kai Zeng, “Physical Layer Key Agreement Under Signal Injection Attacks,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 254-262. doi: 10.1109/CNS.2015.7346835
Abstract: Physical layer key agreement techniques derive a symmetric cryptographic key from the wireless fading channel between two wireless devices by exploiting channel randomness and reciprocity. Existing works mainly focus on the security analysis and protocol design of these techniques under passive attacks; their study under active attacks is largely open. In this paper, we present a new, highly threatening form of active attack, named the signal injection attack. By injecting similar signals to both keying devices, the attacker aims at manipulating the channel measurements and compromising a portion of the key. We further propose a countermeasure to the signal injection attack, PHY-UIR (PHYsical layer key agreement with User Introduced Randomness). In PHY-UIR, both keying devices independently introduce randomness into the channel probing frames and compose common random series by combining the randomness in the fading channel with that introduced by the users. With this solution, the composed series and injected signals become uncorrelated. Thus, the final key automatically excludes the contaminated portion related to the injected signal while preserving the portion related to the random fading channel. Moreover, the contaminated composed series at the two keying devices become decorrelated, which helps detect the attack. We analyze the security strength of PHY-UIR and conduct extensive simulations to evaluate it. Both theoretical analysis and simulations demonstrate the effectiveness of PHY-UIR. We also perform proof-of-concept experiments using software defined radios in a real-world environment. We show that the signal injection attack is feasible in practice and leads to a strong correlation (0.75) between the injected signal and the channel measurements at legitimate users for existing key generation methods. PHY-UIR is immune to the signal injection attack and results in low correlation (0.15) between the injected signal and the composed random signals at legitimate users.
Keywords: cryptography; fading channels; telecommunication security; PHY-UIR; channel measurements; channel probing frames; channel randomness; physical layer key agreement techniques; physical layer key agreement with user introduced randomness; protocol design; random fading channel; reciprocity; security analysis; security strength; signal injection attack; signal injection attacks; symmetric cryptographic key; theoretical analysis; wireless fading channel; Clocks; Cryptography; DH-HEMTs; Niobium; Protocols; Yttrium (ID#: 16-10292)


X. Zhang, X. Yang, J. Lin and W. Yu, “On False Data Injection Attacks Against the Dynamic Microgrid Partition in the Smart Grid,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7222-7227. doi: 10.1109/ICC.2015.7249479
Abstract: To enhance the reliability and efficiency of energy service in the smart grid, the concept of the microgrid has been proposed. Nonetheless, securing the dynamic microgrid partition process is essential in the smart grid. In this paper, we address the security of the dynamic microgrid partition process and systematically investigate three false data injection attacks against it. Particularly, we first discuss the dynamic microgrid partition problem based on a Connected Graph Constrained Knapsack Problem (CGKP) algorithm. We then develop a theoretical model and carry out simulations to investigate the impacts of these false data injection attacks on the effectiveness of the dynamic microgrid partition process. Our theoretical and simulation results show that the investigated attacks can disrupt the dynamic microgrid partition process and negatively impact the balance of energy demand and supply within microgrids, such as an increased number of lack-nodes and increased energy loss in microgrids.
Keywords: computer network security; distributed power generation; graph theory; knapsack problems; power engineering computing; power system management; power system measurement; power system reliability; smart power grids; algorithm; connected graph constrained knapsack problem; dynamic microgrid partition process security; energy service efficiency; false data injection attacks; smart power grid reliability; Energy loss; Heuristic algorithms; Microgrids; Partitioning algorithms; Smart grids; Smart meters
(ID#: 16-10293)
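For context, the knapsack core of the CGKP formulation above, before the graph-connectivity constraint is added, is the classical 0/1 knapsack, solvable by dynamic programming. A sketch with hypothetical node utilities and costs:

```python
def knapsack(items, budget):
    """0/1 knapsack by DP: items is a list of (value, cost) pairs,
    budget is the total cost allowed. Returns the best total value."""
    best = [0] * (budget + 1)
    for value, cost in items:
        for b in range(budget, cost - 1, -1):   # iterate downward so each item is used once
            best[b] = max(best[b], best[b - cost] + value)
    return best[budget]

# Hypothetical nodes with (utility, partition cost); budget k = 5.
items = [(6, 2), (5, 3), (4, 2), (3, 1)]
print(knapsack(items, 5))   # 13: take the items with costs 2, 2 and 1
```

The CGKP variant additionally requires the selected nodes to induce a connected subgraph, which is where the false-data injections studied in the paper can distort the partition.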


X. Zhao, F. Deng, H. Liang and L. Zhou, “Monitoring the Deformation of the Facade of a Building Based on Terrestrial Laser Point-Cloud,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 183-186. doi: 10.1109/CIS.2015.52
Abstract: When terrestrial laser point-cloud data are employed for monitoring the façade of a building, point-cloud data collected in different phases cannot be used directly to calculate the deformation displacement, because inhomogeneous sampling means the data points in a homogeneous region do not correspond across phases. To address this problem, a triangular patch is built from the earlier point-cloud data, the distance from the later point-cloud data to that patch is measured in the homogeneous region, and the deformation displacement is thus determined. On this basis, software for laser point-cloud monitoring analysis is developed, and three series of experiments are designed to verify the effectiveness of the method.
Keywords: buildings (structures); condition monitoring; deformation; distance measurement; structural engineering; building façade deformation monitoring; data points; deforming displacement; distance measurement; homogeneous region; inhomogeneous sampling; laser point-cloud monitoring analysis; point-cloud data; terrestrial laser point-cloud; triangular patch; Buildings; Data models; Deformable models; Mathematical model; Monitoring; Reliability; Three-dimensional displays; building façade; deformation monitoring; point-cloud (ID#: 16-10294)
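The patch-distance step described above reduces, for each triangular patch, to a point-to-plane distance computed from the triangle's normal. A minimal sketch with made-up coordinates (metres):

```python
def point_to_patch_distance(p, a, b, c):
    """Perpendicular distance from point p to the plane of triangle (a, b, c),
    each given as an (x, y, z) tuple."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    # Plane normal = ab x ac (cross product).
    n = [ab[1] * ac[2] - ab[2] * ac[1],
         ab[2] * ac[0] - ab[0] * ac[2],
         ab[0] * ac[1] - ab[1] * ac[0]]
    norm = sum(x * x for x in n) ** 0.5
    ap = [p[i] - a[i] for i in range(3)]
    return abs(sum(n[i] * ap[i] for i in range(3))) / norm

# A patch in the z = 0 plane; a later-epoch point displaced 0.02 m along z.
d = point_to_patch_distance((0.2, 0.3, 0.02),
                            (0, 0, 0), (1, 0, 0), (0, 1, 0))   # 0.02
```

Repeating this for every later-epoch point against its nearest earlier-epoch patch yields the per-point deformation field the software analyzes.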


H. Alizadeh, A. Khoshrou and A. Zúquete, “Traffic Classification and Verification Using Unsupervised Learning of Gaussian Mixture Models,” Measurements & Networking (M&N), 2015 IEEE International Workshop on, Coimbra, 2015, pp. 1-6. doi: 10.1109/IWMN.2015.7322980
Abstract: This paper presents the use of unsupervised Gaussian Mixture Models (GMMs) for producing per-application models from flow statistics, to be exploited in two different scenarios: (i) traffic classification, where the goal is to classify traffic flows by application, and (ii) traffic verification or traffic anomaly detection, where the aim is to confirm whether or not traffic flow generated by the claimed application conforms to its expected model. Unlike the first scenario, the second is a newer research path that has received less attention in the scope of Intrusion Detection System (IDS) research. The term “unsupervised” refers to the method's ability to select the optimal number of components automatically without the need for careful initialization. Experiments are carried out using a public dataset collected from a real network. Favorable results indicate the effectiveness of unsupervised GMMs.
Keywords: Gaussian processes; computer network security; mixture models; pattern classification; security of data; telecommunication traffic; unsupervised learning; Gaussian mixture model; IDS; intrusion detection system; traffic anomaly detection; traffic classification; traffic flow; traffic verification; unsupervised GMM; unsupervised learning; Accuracy; Feature extraction; Mixture models; Payloads; Ports (Computers); Protocols; Training (ID#: 16-10295)
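A minimal sketch of the EM fitting that underlies such per-application models, here for a one-dimensional two-component GMM over made-up flow statistics (real implementations fit multivariate mixtures and, as the abstract stresses, select the component count automatically):

```python
import math

def fit_gmm2(data, iters=50):
    """EM for a 1-D mixture of two Gaussians. Returns (weights, means, stds)."""
    mu = [min(data), max(data)]          # crude but deterministic initialisation
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-((x - mu[k]) ** 2) / (2 * sd[k] ** 2)) for k in (0, 1)]
            tot = p[0] + p[1]
            resp.append([p[0] / tot, p[1] / tot])
        # M-step: re-estimate parameters from the responsibilities.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(math.sqrt(var), 1e-3)   # floor to avoid collapse
    return w, mu, sd

# Two well-separated clusters of hypothetical flow sizes.
data = [0.9, 1.1, 1.0, 0.8, 1.2, 9.9, 10.1, 10.0, 9.8, 10.2]
w, mu, sd = fit_gmm2(data)   # means converge near 1.0 and 10.0
```

For verification, a flow's statistic would then be scored under the fitted mixture's likelihood and flagged as anomalous when that likelihood falls below a threshold.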


N. Matyunin, J. Szefer, S. Biedermann and S. Katzenbeisser, “Covert Channels Using Mobile Device's Magnetic Field Sensors,” 2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC), Macau, 2016, pp. 525-532. doi: 10.1109/ASPDAC.2016.7428065
Abstract: This paper presents a new covert channel using smartphone magnetic sensors. We show that modern smartphones are capable of detecting the magnetic field changes induced by different computer components during I/O operations. In particular, we are able to create a covert channel between a laptop and a mobile device without any additional equipment, firmware modifications, or privileged access on either of the devices. We present two encoding schemes for the covert channel communication and evaluate their effectiveness.
Keywords: encoding; magnetic field measurement; magnetic sensors; smart phones; I/O operations; computer components; covert channels; encoding schemes; laptop; magnetic field changes; magnetic field sensors; mobile device; smartphone magnetic sensors; Encoding; Hardware; Magnetic heads; Magnetic sensors; Magnetometers; Portable computers (ID#: 16-10296)
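As a toy illustration of what such an encoding scheme might look like, the sketch below on-off-keys bits into timed bursts of activity (which perturb the sensed field) and decodes by thresholding the averaged field per symbol period. The sample rate, threshold, and noise model are all hypothetical, not taken from the paper:

```python
def encode(bits, samples_per_symbol=4):
    """Map each bit to a burst (high field) or idle (low field) symbol period."""
    signal = []
    for b in bits:
        level = 1.0 if b else 0.0
        signal.extend([level] * samples_per_symbol)
    return signal

def decode(signal, samples_per_symbol=4, threshold=0.5):
    """Average the sensed field over each symbol period and threshold it."""
    bits = []
    for i in range(0, len(signal), samples_per_symbol):
        chunk = signal[i:i + samples_per_symbol]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits

message = [1, 0, 1, 1, 0]
noisy = [s + 0.1 * ((i % 2) - 0.5) for i, s in enumerate(encode(message))]  # mild noise
recovered = decode(noisy)   # == message
```

Averaging several magnetometer samples per symbol is what makes the toy decoder tolerant of sensor noise; the paper's two schemes trade off such robustness against throughput.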


H. Wei, Y. Zhang, D. Guo and X. Wei, “CARISON: A Community and Reputation Based Incentive Scheme for Opportunistic Networks,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1398-1403. doi: 10.1109/IMCCC.2015.299
Abstract: Forwarding messages in opportunistic networks incurs storage and energy costs for nodes, so some nodes become selfish or even malicious, and these behaviors degrade the connectivity of opportunistic networks. To tackle this issue, in this paper we propose CARISON: a community and reputation based incentive scheme for opportunistic networks. In CARISON, every node belongs to a community, manages its own reputation evidence, and demonstrates its reputation whenever necessary. To exclude malicious nodes, we propose an altruism function as a critical factor. Besides, considering the social attributes of nodes, we propose two ways to calculate reputation: intra-community and inter-community reputation calculation. We also propose a binary-exponent punishment strategy to punish nodes with low reputation. Extensive performance analysis and simulations demonstrate the effectiveness and efficiency of the proposed scheme.
Keywords: cooperative communication; incentive schemes; telecommunication security; CARISON; altruism function; binary exponent punishment strategy; community and reputation based incentive scheme; inter-community reputation calculating; intra-community reputation calculating; malicious behaviors; malicious nodes; opportunistic networks; reputation evidence; selfish behaviors; social attributes; Analytical models; Computational modeling; Computers; History; Incentive schemes; Monitoring; Performance analysis; Altruism function; Binary exponent punishment strategy; Community; Opportunistic networks; Reputation based incentive; Selfish (ID#: 16-10297)


Z. Pang, F. Hou, Y. Zhou and D. Sun, “Design of False Data Injection Attacks for Output Tracking Control of CARMA Systems,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 1273-1277. doi: 10.1109/ICInfA.2015.7279482
Abstract: Considerable attention has focused on the problem of cyber-attacks on cyber-physical systems in recent years. In this paper, we consider a class of single-input single-output systems which are described by a controlled auto-regressive moving average (CARMA) model. A PID controller is designed to make the system output track the reference signal. Then the state-space model of the controlled plant and the corresponding Kalman filter are employed to generate stealthy false data injection attacks for the sensor measurements, which can destroy the control system performance without being detected by an online parameter identification algorithm. Finally, two numerical simulation results are given to demonstrate the effectiveness of the proposed false data injection attacks.
Keywords: Kalman filters; autoregressive moving average processes; control system synthesis; security of data; state-space methods; three-term control; CARMA systems; Kalman filter; PID controller design; controlled auto-regressive moving average; false data injection attacks; online parameter identification algorithm; output tracking control; single-input single-output systems; state-space model; Conferences; Control systems; Detectors; Mathematical model; Parameter estimation; Smart grids; CARMA model; Cyber-physical systems (CPSs); output feedback control (ID#: 16-10298)
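A toy version of the stealthy-attack idea can be sketched as follows. This is a deliberate simplification, not the paper's CARMA/PID/Kalman construction: the plant is an assumed scalar first-order system, the controller is a bare integral term, and the detector is a one-step observer with a residual gate whose threshold is invented. The point it illustrates is the abstract's core claim: a measurement attack shaped to stay consistent with the model (here, a slow ramp) keeps the residual below the alarm level while steadily degrading tracking.

```python
# Toy stealthy false-data-injection demo on an assumed scalar plant.
A, K_OBS, KI = 0.9, 0.5, 0.3        # plant pole, observer gain, integral gain
REF, THRESHOLD = 5.0, 0.5           # setpoint and (assumed) residual alarm gate

x, x_pred, u = 0.0, 0.0, 0.0        # true state, observer prediction, control
residuals = []

for k in range(300):
    delta = 0.01 * (k - 100) if k > 100 else 0.0   # slow attack ramp after k=100
    y_meas = x + delta                              # corrupted sensor reading

    r = y_meas - x_pred                             # detection residual
    residuals.append(abs(r))
    x_hat = x_pred + K_OBS * r                      # observer correction

    u += KI * (REF - y_meas)                        # integral control acts on the
                                                    # (corrupted) measurement
    x = A * x + u                                   # true plant update
    x_pred = A * x_hat + u                          # observer prediction

assert max(residuals) < THRESHOLD   # attack never trips the residual detector
assert abs(x - REF) > 1.5           # yet the true output is dragged off target
```

Because the observer keeps chasing the slowly drifting measurement, the residual only reflects the ramp's small per-step increment, which is exactly why slow, model-consistent injections evade residual-based detection.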


Y. Hu and M. Sun, “Synchronization of a Class of Hyperchaotic Systems via Backstepping and Its Application to Secure Communication,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1055-1060. doi: 10.1109/IMCCC.2015.228
Abstract: Research on multi-scroll hyperchaotic systems, which perform well in secure communication, is comparatively scarce: there are no systematic design methods, and existing methods have difficulty handling uncertainties. In this paper, an adaptive backstepping control is proposed. Adaptive updating laws are presented to approximate the uncertainties. The proposed method improves the robustness of the controller using only two control inputs. The asymptotic convergence of the synchronization errors to zero is proved via Lyapunov functions. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed synchronization control scheme and its validity in secure communication.
Keywords: Lyapunov methods; chaotic communication; control nonlinearities; synchronisation; telecommunication security; Lyapunov functions; adaptive back stepping control; multiscroll hyperchaotic systems; secure communication; synchronization control scheme; systematic design methods; Adaptive control; Backstepping; Chaotic communication; Synchronization; adaptive control; backstepping; hyperchaos; multi-scroll (ID#: 16-10299)
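The master-slave synchronization idea underlying schemes like this one can be shown in a much-simplified form. The sketch below is not the paper's adaptive backstepping design for multi-scroll hyperchaotic systems; it uses an ordinary Lorenz system and plain linear error feedback u = -k(slave - master), with the gain and step size chosen by assumption, which already drives the synchronization error to zero for a large enough gain.

```python
# Simplified master-slave synchronization: plain linear error feedback on a
# Lorenz system (a stand-in for the paper's backstepping design).

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(master, slave, k, dt):
    fm, fs = lorenz(master), lorenz(slave)
    new_m = tuple(m + dt * d for m, d in zip(master, fm))
    # slave adds the coupling term u = -k*(slave - master) on every state
    new_s = tuple(s + dt * (d - k * (s - m))
                  for s, m, d in zip(slave, master, fs))
    return new_m, new_s

master, slave = (1.0, 1.0, 1.0), (2.0, 0.0, 3.0)
k, dt = 100.0, 0.001                 # gain large enough to dominate the flow
for _ in range(20000):               # integrate 20 time units (Euler)
    master, slave = step(master, slave, k, dt)

err = max(abs(s - m) for s, m in zip(slave, master))
assert err < 1e-3                    # synchronization error has collapsed
```

The paper's contribution is precisely to replace this brute-force high-gain coupling with a backstepping design that achieves convergence with only two control inputs and adapts to uncertainties.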


I. Kiss, B. Genge and P. Haller, “A Clustering-Based Approach to Detect Cyber Attacks in Process Control Systems,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 142-148. doi: 10.1109/INDIN.2015.7281725
Abstract: Modern Process Control Systems (PCS) exhibit an increasing trend towards the pervasive adoption of commodity, off-the-shelf Information and Communication Technologies (ICT). This has brought significant economic and operational benefits, but it also shifted the architecture of PCS from a completely isolated environment to an open, “system of systems” integration with traditional ICT systems, susceptible to traditional computer attacks. In this paper we present a novel approach to detect cyber attacks targeting measurements sent to control hardware, i.e., typically to Programmable Logical Controllers (PLC). The approach builds on the Gaussian mixture model to cluster sensor measurement values and a cluster assessment technique known as silhouette. We experimentally demonstrate that in this particular problem the Gaussian mixture clustering outperforms the k-means clustering algorithm. The effectiveness of the proposed technique is tested in a scenario involving the simulated Tennessee-Eastman chemical process and three different cyber attacks.
Keywords: Gaussian processes; control engineering computing; mixture models; pattern clustering; process control; production engineering computing; programmable controllers security of data; Gaussian mixture model; ICT systems; Information and Communication Technologies; PCS; PLC; cluster assessment technique; cluster sensor measurement values; computer attacks; cyber attack detection; process control systems; programmable logical controllers; silhouette; simulated Tennessee-Eastman chemical process; system of systems integration; Clustering algorithms; Computer crime; Engines; Mathematical model; Process control (ID#: 16-10300)
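The cluster-and-assess idea can be sketched with standard-library code only. The paper fits a Gaussian mixture; for brevity this sketch uses the k-means baseline the paper compares against, on invented one-dimensional sensor readings, and computes the silhouette coefficient by hand to confirm that the normal/attacked split is meaningful.

```python
# Minimal cluster-and-assess sketch: 1-D k-means (the paper's baseline, not
# its GMM) plus a hand-rolled mean silhouette score on invented readings.

def kmeans_1d(xs, iters=25):
    c0, c1 = min(xs), max(xs)                    # deterministic initialization
    for _ in range(iters):
        a = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        b = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return a, b                                  # low cluster, high cluster

def mean_silhouette(a, b):
    def s(x, own, other):
        ai = sum(abs(x - y) for y in own if y != x) / max(len(own) - 1, 1)
        bi = sum(abs(x - y) for y in other) / len(other)
        return (bi - ai) / max(ai, bi)
    vals = [s(x, a, b) for x in a] + [s(x, b, a) for x in b]
    return sum(vals) / len(vals)

readings = [9.8, 10.1, 10.0, 9.9, 10.2,          # normal process values
            25.1, 24.9, 25.3, 24.8]              # values under injection attack
normal, attack = kmeans_1d(readings)
score = mean_silhouette(normal, attack)
assert score > 0.9        # near-1 silhouette: the two regimes separate cleanly
```

A silhouette near 1 says each point sits far closer to its own cluster than to the other, which is the assessment step the abstract refers to; with overlapping regimes the score drops and the split becomes untrustworthy.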


H. Wu, X. Dang, L. Zhang and L. Wang, “Kalman Filter Based DNS Cache Poisoning Attack Detection,” 2015 IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, 2015, pp. 1594-1600. doi: 10.1109/CoASE.2015.7294328
Abstract: Detection of Domain Name System cache poisoning attacks is investigated. We exploit the fact that, when an attack is happening, the entropy of the query-packet IP addresses at the cache server decreases. We focus on the detection method for the case where the entropy sequence has nonstationary dynamics under normal conditions. To handle the nonstationarity, we first model the entropy sequence with a state-space equation and then use a Kalman filter to implement attack detection. The problem is discussed for single and distributed cache poisoning attacks, respectively. For a single attack, we use the measurement errors to detect the anomaly. Under a distributed attack, we use the correlation variation of the prediction errors to detect the attack event and identify the attacked cache servers. An experiment verifies the effectiveness of the presented method.
Keywords: IP networks; Kalman filters; cache storage; computer network security; entropy; file servers; query processing; Kalman filter based DNS cache poisoning attack detection; attack event; attacked cache servers; correlation variation; domain name systems; entropy sequence; measurement errors; query packet IP addresses; state space equation; Correlation; Entropy; Mathematical model; Servers (ID#: 16-10301)
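The single-attack case can be sketched end to end with assumed parameters. Per-window Shannon entropy of query source IPs is tracked by a scalar Kalman filter with a random-walk state model (the Q, R values and 3-sigma gate below are invented, not the paper's); a window whose measurement residual exceeds the gate is flagged as a poisoning attempt.

```python
# Single-attack sketch: windowed IP-address entropy tracked by a scalar
# Kalman filter; a 3-sigma residual gate flags the entropy collapse.

import math
from collections import Counter

def window_entropy(ips):
    n = len(ips)
    return -sum((c / n) * math.log2(c / n) for c in Counter(ips).values())

def detect(windows, q=0.01, r=0.05):
    x, p = window_entropy(windows[0]), 1.0      # init state from first window
    alerts = []
    for i, w in enumerate(windows[1:], start=1):
        p += q                                   # predict (random-walk model)
        z = window_entropy(w)
        innov = z - x                            # measurement residual
        gate = 3.0 * math.sqrt(p + r)
        if abs(innov) > gate:
            alerts.append(i)                     # flag and skip the update so
        else:                                    # the attack can't poison x
            k = p / (p + r)                      # Kalman gain
            x += k * innov
            p *= (1 - k)
    return alerts

normal = [f"10.0.0.{j}" for j in range(8)]       # 8 distinct query sources
attack = ["6.6.6.6"] * 8                         # spoofed flood, one source
windows = [normal] * 5 + [attack] + [normal] * 2
assert detect(windows) == [5]                    # only the attack window fires
```

Skipping the state update on a flagged window is a common practical choice: it keeps the attacked measurement from dragging the tracked baseline down and masking a sustained attack.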


R. Cao, J. Wu, C. Long and S. Li, “Stability Analysis for Networked Control Systems Under Denial-of-Service Attacks,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 7476-7481. doi: 10.1109/CDC.2015.7403400
Abstract: With the large-scale application of modern information technology in networked control systems (NCSs), the security of NCSs has drawn increasing attention in recent years. However, how severely NCSs can be affected by adversaries has seldom been considered. In this paper, we consider a stability problem for NCSs under denial-of-service (DoS) attacks in which control and measurement packets are transmitted over communication networks. We model the NCS under DoS attacks as a singular system, where the effect of the DoS attack is described as a time-varying delay. Using a Wirtinger-based integral inequality, a less conservative attack-based delay-dependent stability criterion for NCSs is obtained in terms of linear matrix inequalities (LMIs). Finally, examples are given to illustrate the effectiveness of our methods.
Keywords: delays; linear matrix inequalities; networked control systems; stability; time-varying systems; DoS attacks; LMI; NCS stability; Wirtinger-based integral inequality; attack-based delay-dependent criterion; communication networks; control packets; denial-of-service attacks; large-scale application; measurement packets; stability analysis; stability problem; time-varying delay; Computer crime; Delays; Networked control systems; Power system stability; Stability criteria; Symmetric matrices (ID#: 16-10302)
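The paper's exact singular-system matrices and LMI conditions are not reproduced here, but the setup it describes can be sketched. A DoS attack that delays control or measurement packets is folded into a time-varying input delay, and the Wirtinger-based integral inequality (Seuret and Gouaisbaut's refinement of Jensen's inequality) is the tool that bounds the resulting delay term less conservatively:

```latex
% DoS attack modeled as a bounded time-varying delay on the feedback path:
\dot{x}(t) = A\,x(t) + B K\, x\bigl(t - \tau(t)\bigr),
\qquad 0 \le \tau(t) \le \bar{\tau}.

% Wirtinger-based integral inequality used in the delay-dependent analysis:
\int_a^b \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds \;\ge\;
  \frac{1}{b-a}\,\omega_1^{\top} R\, \omega_1
  + \frac{3}{b-a}\,\omega_2^{\top} R\, \omega_2,
\qquad
\omega_1 = x(b) - x(a), \quad
\omega_2 = x(b) + x(a) - \frac{2}{b-a}\int_a^b x(s)\, ds.
```

The extra \(\omega_2\) term is what distinguishes this bound from Jensen's inequality; it is the source of the "less conservative" delay-dependent criterion the abstract claims.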


M. Wang, X. Wu, D. Liu, C. Wang, T. Zhang and P. Wang, “A Human Motion Prediction Algorithm for Non-Binding Lower Extremity Exoskeleton,” 2015 IEEE International Conference on Information and Automation, Lijiang, 2015, pp. 369-374. doi: 10.1109/ICInfA.2015.7279315
Abstract: This paper introduces a novel approach to predict human motion for the Non-binding Lower Extremity Exoskeleton (NBLEX). Most exoskeletons must be attached to the pilot, which poses potential safety problems. To solve these problems, the NBLEX is studied and designed to free pilots from the exoskeleton. Rather than applying electromyography (EMG) and ground reaction force (GRF) signals to predict human motion as in binding exoskeletons, the non-binding exoskeleton robot collects inertial measurement unit (IMU) signals from the pilot. Seven basic motions are studied; each motion is divided into four phases, except the standing-still motion, which has only one phase. The human motion prediction algorithm adopts a Support Vector Machine (SVM) to classify human motion phases and a Hidden Markov Model (HMM) to predict human motion. The experimental data demonstrate the effectiveness of the proposed algorithm.
Keywords: control engineering computing; hidden Markov models; mobile robots; motion control; support vector machines; EMG signal; GRF signal; HMM; IMU signal; NBLEX; SVM; electromyography; ground reaction force signal; hidden Markov model; human motion phase; human motion prediction algorithm; inertial measurement unit signal; nonbinding exoskeleton robot; nonbinding lower extremity exoskeleton; standing-still motion; support vector machine; Accuracy; Classification algorithms; Exoskeletons; Hidden Markov models; Prediction algorithms; Support vector machines; Training; Exoskeleton; Hidden Markov Model; Human Motion Prediction; Non-binding Lower Extremity Exoskeleton; Support Vector Machine (ID#: 16-10303)
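The HMM prediction stage can be sketched with assumed numbers. The SVM stage is stubbed out as noisy per-window phase labels; a four-phase cyclic HMM (the transition and emission probabilities below are invented, not the paper's trained values) decodes the phase sequence with the Viterbi algorithm and predicts the next phase from the transition matrix.

```python
# Sketch of the prediction stage: Viterbi decoding over classifier-produced
# phase labels with a 4-phase cyclic HMM, then one-step phase prediction.

N = 4                                  # motion phases per motion
T = [[0.6 if j == i else 0.4 if j == (i + 1) % N else 0.0
      for j in range(N)] for i in range(N)]           # stay or advance a phase
E = [[0.9 if j == i else 0.1 / 3 for j in range(N)]
     for i in range(N)]                               # classifier is 90% right

def viterbi(obs):
    probs = [E[s][obs[0]] / N for s in range(N)]      # uniform prior
    back = []
    for o in obs[1:]:
        ptr, new = [], []
        for s in range(N):
            prev = max(range(N), key=lambda p: probs[p] * T[p][s])
            ptr.append(prev)
            new.append(probs[prev] * T[prev][s] * E[s][o])
        probs = new
        back.append(ptr)
    state = max(range(N), key=lambda s: probs[s])
    path = [state]
    for ptr in reversed(back):                        # backtrack best path
        state = ptr[state]
        path.append(state)
    return path[::-1]

obs = [0, 0, 1, 1, 2]                  # phase labels from the SVM stage (stub)
path = viterbi(obs)                    # decoded phase sequence
next_phase = max(range(N), key=lambda s: T[path[-1]][s])
```

Decoding before predicting is the point of pairing the SVM with an HMM: the cyclic transition structure filters out classifier errors that would imply impossible phase jumps.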


M. Ingels, A. Valjarevic and H. S. Venter, “Evaluation and Analysis of a Software Prototype for Guidance and Implementation of a Standardized Digital Forensic Investigation Process,” 2015 Information Security for South Africa (ISSA), Johannesburg, 2015, pp. 1-8. doi: 10.1109/ISSA.2015.7335052
Abstract: Performing a digital forensic investigation requires a standardized and formalized process to be followed. The authors have contributed to the creation of an international standard on the digital forensic investigation process, namely ISO/IEC 27043:2015, which was published in 2015. However, currently there exists no application that would guide a digital forensic investigator in implementing such a standardized process. The prototype of such an application has been developed by the authors and presented in their previous work. The prototype is a software application with two main functionalities. The first is to act as an expert system that can be used for guidance and training of novice investigators. The second is to enable reliable logging of all actions taken within the investigation processes, enabling validation that a correct process was used. The benefits of such a prototype include possible improvement in the efficiency and effectiveness of an investigation and easier training of novice investigators. The last, and possibly most important, benefit is higher admissibility of digital evidence, since it will be easier to show that the standardized process was followed. This paper presents an evaluation of the prototype, performed in order to measure the usability and quality of the prototype software, as well as its effectiveness. The evaluation consisted of two main parts. The first part was a software usability evaluation, performed using the Software Usability Measurement Inventory (SUMI), a reliable method of measuring software usability and quality. The second part was a questionnaire set up by the authors, with the aim of evaluating whether the prototype meets its goals.
The results indicated that the prototype reaches most of its goals, that it has the intended functionalities, and that it is relatively easy to learn and use. Areas of improvement and future work were also identified.
Keywords: digital forensics; software performance evaluation; software prototyping; software quality; ISO/IEC 27043:2015; SUMI; digital forensic investigation process; software prototype analysis; software prototype evaluation; software quality; software usability evaluation; software usability measurement inventory; Cryptography; Libraries; Organizations; Software; Standards organizations; Yttrium; digital forensic investigation process model; implementation prototype; software evaluation; standardization (ID#: 16-10304)


J. G. Cui, P. J. Zhou, M. Y. Yu, C. Liu and X. Y. Xu, “Research on Time Optimization Algorithm of Aircraft Support Activity with Limited Resources,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1298-1303. doi: 10.1109/IMCCC.2015.279
Abstract: The required time of aircraft turnaround support activity directly affects aircraft combat effectiveness. Aiming at the problem that the shortest support time is hard to achieve under limited aircraft support resources, a time optimization algorithm for aircraft turnaround support activity based on the Branch and Cut Method (BCM) is given in this paper. The goal is to achieve the shortest required time for the aircraft turnaround support activity, with the constraints being the logical relationships between the limited support personnel and the support jobs. The shortest-time process is calculated and compiled into a computer program, and a time-optimal simulation system for aircraft turnaround support activity is designed and developed. Finally, a real support job for a certain type of aircraft is analyzed. The results show that the calculated result is accurate and reliable, is consistent with the actual situation, and can provide guidance for aircraft turnaround support and decision-making. The reliability and automation level of support activities are enhanced, and the approach has good application value in engineering.
Keywords: aircraft; decision making; optimisation; reliability theory; resource allocation; tree searching; BCM; aircraft turnaround support activity; automation level; branch and cut method; reliability level; resource limitation; time optimal simulation system; time optimization algorithm; Aerospace electronics; Aircraft; Aircraft manufacture; Atmospheric modeling; Mathematical model; Optimization; Personnel; Branch and Cut Method; Limited resources; Simulation; Support activity time (ID#: 16-10305)
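The resource-constrained scheduling problem behind this work can be illustrated with a brute-force stand-in for the paper's Branch and Cut optimization: with a handful of turnaround jobs (duration, personnel needed, all numbers invented here) and a limited crew, try every priority order, list-schedule each one greedily, and keep the shortest makespan. A real branch-and-cut solver prunes this search via LP relaxations instead of enumerating it.

```python
# Exhaustive stand-in for branch-and-cut: minimize turnaround makespan for
# jobs (duration, personnel demand) under a fixed crew size.

from itertools import permutations

JOBS = {"refuel": (3, 1), "rearm": (2, 1),
        "inspect": (2, 2), "data_load": (1, 1)}   # invented example data
CREW = 2

def list_schedule(order, jobs, crew):
    """Greedy list scheduling: at each event time, start every not-yet-run
    job (in priority order) whose personnel demand still fits the crew."""
    t, pending, running = 0.0, list(order), []    # running: (end_time, demand)
    while pending or running:
        used = sum(d for _, d in running)
        for name in list(pending):
            dur, demand = jobs[name]
            if demand <= crew - used:
                running.append((t + dur, demand))
                pending.remove(name)
                used += demand
        t = min(end for end, _ in running)        # advance to next completion
        running = [(end, d) for end, d in running if end > t]
    return t

best = min(list_schedule(p, JOBS, CREW) for p in permutations(JOBS))
assert best == 5.0    # matches the work/capacity lower bound: 10 man-hours / 2
```

Even at this toy scale the priority order matters: some orders force the two-person inspection to wait and yield a makespan of 6, which is exactly the gap an exact optimizer like BCM is there to close.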


J. M. G. Duarte, E. Cerqueira and L. A. Villas, “Indoor Patient Monitoring Through Wi-Fi and Mobile Computing,” 2015 7th International Conference on New Technologies, Mobility and Security (NTMS), Paris, 2015, pp. 1-5. doi: 10.1109/NTMS.2015.7266497
Abstract: The developments in wireless sensor networks, mobile technology, and cloud computing have been pushing forward the concept of intelligent or smart cities, and each day smarter infrastructures are being developed with the aim of enhancing the well-being of citizens. These technological advances can provide considerable benefits for the diverse components of smart cities, including smart health, the facet of smart cities dedicated to healthcare. A considerable challenge that still requires appropriate responses is the development of mechanisms to detect health issues in patients from the very beginning. In this work, we propose a novel solution for indoor patient monitoring for medical purposes. The output of our solution is a report containing the patterns of room occupation by the patient inside his or her home during a certain period of time. This report allows health-care professionals to detect changes in the patient's behavior that can be interpreted as early signs of a health-related issue. The proposed solution was implemented on an Android smartphone and tested in a real scenario. To assess the solution, 400 measurements divided into 10 experiments were performed, yielding 391 correct detections, which corresponds to an average effectiveness of 97.75%.
Keywords: cloud computing; indoor radio; mobile computing; patient monitoring; smart cities; smart phones; wireless LAN; wireless sensor networks; Android smartphone; Wi-Fi; indoor patient monitoring; intelligent cities; smart health; wireless sensor networks; IEEE 802.11 Standard; Medical services; Mobile communication; Mobile computing; Monitoring; Sensors; Wireless sensor networks; Behavior; Indoor monitoring; Patient; Smart health; Smartphone; Wi-Fi (ID#: 16-10306)
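A common way to realize this kind of Wi-Fi room detection, sketched below with entirely invented fingerprints (the paper does not publish its classification method, so this is an assumed nearest-fingerprint approach): each room has a mean RSSI signature over the visible access points, every scan is assigned to the nearest signature, and the per-room dwell counts become the occupancy report handed to health-care staff.

```python
# Assumed nearest-fingerprint room detection: RSSI scans -> room labels ->
# per-room occupancy counts (the report described in the abstract).

FINGERPRINTS = {                       # mean RSSI (dBm) per access point
    "bedroom":  (-42.0, -71.0),
    "kitchen":  (-70.0, -44.0),
    "bathroom": (-60.0, -58.0),
}

def classify(scan):
    """Nearest-fingerprint (1-NN) room estimate for one RSSI scan."""
    return min(FINGERPRINTS,
               key=lambda room: sum((a - b) ** 2
                                    for a, b in zip(FINGERPRINTS[room], scan)))

def occupancy_report(scans):
    """Count how many scans fall in each room over the monitoring period."""
    report = {room: 0 for room in FINGERPRINTS}
    for scan in scans:
        report[classify(scan)] += 1
    return report

scans = [(-43, -70), (-41, -73), (-69, -45), (-61, -57), (-44, -72)]
report = occupancy_report(scans)
assert report == {"bedroom": 3, "kitchen": 1, "bathroom": 1}
```

Turning raw counts into dwell-time fractions per day is then a trivial post-processing step, and it is shifts in those fractions (e.g., sharply more time in the bedroom) that clinicians would read as early warning signs.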


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of the links or for modifications to specific citations via email, and please include the ID# of the specific citation in your correspondence.