Biblio

Found 3835 results

Filters: Keyword is Scalability
2021-04-09
Mir, N., Khan, M. A. U.  2020.  Copyright Protection for Online Text Information: Using Watermarking and Cryptography. 2020 3rd International Conference on Computer Applications & Information Security (ICCAIS). :1–4.
Information and security are interdependent elements. Information security has become a matter of global interest, and achieving it requires tools, policies, and assurance of technologies against relevant security risks. The influx of the Internet, by providing a flexible and economical means of sharing information online, has rapidly attracted countless writers. Text being an important constituent of online information sharing, there is a huge demand for intellectual copyright protection of text and of the web itself. Various visible watermarking techniques have been studied for text documents, but few for web-based text. In this paper, web page watermarking and cryptography for online content copyright protection are proposed, utilizing semantic and syntactic rules of HTML (Hypertext Markup Language), and tested for the English and Arabic languages.
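As a rough illustration of how a text watermark can ride on invisible syntactic choices (a sketch only: the zero-width-character encoding below is a common textbook technique, not the semantic/syntactic HTML rule set proposed in the paper):

# Minimal sketch: encode watermark bits as zero-width characters inserted
# after sentence boundaries; extraction reads them back. Illustrative only.
ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed(text: str, bits: str) -> str:
    out, i = [], 0
    for ch in text:
        out.append(ch)
        if ch == "." and i < len(bits):
            out.append(ZW1 if bits[i] == "1" else ZW0)
            i += 1
    return "".join(out)

def extract(text: str) -> str:
    return "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))

marked = embed("Copyright matters. Protect online text. Verify ownership.", "101")
assert extract(marked) == "101"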
Peng, X., Hongmei, Z., Lijie, C., Ying, H.  2020.  Analysis of Computer Network Information Security under the Background of Big Data. 2020 5th International Conference on Smart Grid and Electrical Automation (ICSGEA). :409–412.
In today's society, with the full arrival of the Internet era, the rapid development of technology has facilitated people's work and life, but it is also a “double-edged sword”: personal information and other data are exposed to a greater threat of abuse. The unique features of big data technology, such as massive storage, parallel computing, and efficient querying, have created a breakthrough opportunity for the key technologies of large-scale network security situational awareness. On the basis of big data acquisition, preprocessing, distributed computing, and mining and analysis, a big data analysis platform provides information security assurance services to the information system. This paper discusses security situational awareness in large-scale network environments and the role of big data technology in advancing security perception.
Bhattacharya, M. P., Zavarsky, P., Butakov, S.  2020.  Enhancing the Security and Privacy of Self-Sovereign Identities on Hyperledger Indy Blockchain. 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1–7.
Self-sovereign identities provide user autonomy and immutability to individual identities and full control to their identity owners. The immutability and control are made possible by implementing identities in a decentralized manner on blockchains specially designed for identity operations, such as Hyperledger Indy. As with any type of identity, self-sovereign identities deal with the Personally Identifiable Information (PII) of the identity holders and come with the usual privacy and security risks. This study examined certain scenarios of personal data disclosure via credential exchanges between such identities, and the risks of man-in-the-middle attacks, in the blockchain-based identity system Hyperledger Indy. On the basis of the findings, the paper proposes the following enhancements: 1) a novel attribute sensitivity score model for self-sovereign identity agents to ascertain the sensitivity of attributes shared in credential exchanges; 2) a method of mitigating man-in-the-middle attacks between peer self-sovereign identities; and 3) a novel quantitative model for determining a credential issuer's reputation based on the number of credentials issued in a window period, which is then used to calculate an overall confidence score for the issuer.
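To make the first proposal concrete, a toy sketch of an attribute sensitivity score is given below; the attribute names and weights are hypothetical placeholders, not the model defined in the paper:

# Hypothetical per-attribute sensitivity weights in [0, 1]; the paper
# derives its own scoring model for self-sovereign identity agents.
SENSITIVITY = {
    "national_id": 1.0,
    "date_of_birth": 0.8,
    "home_address": 0.7,
    "email": 0.4,
    "first_name": 0.2,
}

def exchange_sensitivity(requested_attrs):
    # Unknown attributes get a conservative default so the agent can still
    # warn the identity owner before disclosing them.
    scores = [SENSITIVITY.get(a, 0.5) for a in requested_attrs]
    return max(scores), sum(scores) / len(scores)

peak, mean = exchange_sensitivity(["email", "date_of_birth"])
print(f"peak={peak:.1f} mean={mean:.1f}")  # peak=0.8 mean=0.6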
Lyshevski, S. E., Aved, A., Morrone, P.  2020.  Information-Centric Cyberattack Analysis and Spatiotemporal Networks Applied to Cyber-Physical Systems. 2020 IEEE Microwave Theory and Techniques in Wireless Communications (MTTW). 1:172–177.
Cyber-physical systems (CPS) depend on cybersecurity to ensure functionality, data quality, cyberattack resilience, etc. There are known and unknown cyber threats and attacks that pose significant risks. Information assurance and information security are critical. Many systems are vulnerable to intelligence exploitation and cyberattacks. By investigating cybersecurity risks and formal representations of CPS using spatiotemporal dynamic graphs and networks, this paper investigates topics and solutions aimed at examining and empowering: (1) cybersecurity capabilities; (2) information assurance and system vulnerabilities; (3) detection of cyber threats and attacks; (4) situational awareness; etc. We introduce statistically characterized dynamic graphs and novel entropy-centric algorithms and calculi, which promise to ensure near-real-time capabilities.
Ozkan, N., Tarhan, A. K., Gören, B., Filiz, İ., Özer, E.  2020.  Harmonizing IT Frameworks and Agile Methods: Challenges and Solutions for the Case of COBIT and Scrum. 2020 15th Conference on Computer Science and Information Systems (FedCSIS). :709–719.
Information Technology (IT) is a complex domain. In order to properly manage IT-related processes, several frameworks, including ITIL (Information Technology Infrastructure Library), COBIT (Control Objectives for Information and Related Technologies), IT Service CMMI (IT Service Capability Maturity Model), and many others have emerged in recent decades. Meanwhile, the prevalence of Agile methods has increased, posing the coexistence of the Agile approach with the different IT frameworks already adopted in organizations. More specifically, the pursuit of being agile in the area of digitalization pushes organizations toward agile transformation while preserving full compliance with IT frameworks for the sake of their survival. The necessity of this coexistence, however, brings its own challenges and requires solutions for harmonizing the requirements of both parties. In this paper, we focus on harmonizing the requirements of COBIT and Scrum in the same organization, which is especially challenging when full compliance with COBIT is expected. This study therefore aims to identify the challenges of, and possible solutions for, the coexistence of Scrum and COBIT (version 4.1 in this case) in an organization, by considering two case studies: one from the literature and the case of Akbank delivered in this study. It thus extends the corresponding previous case study in two ways: it adds one more case study to enrich the earlier results, and it provides more opportunity for generalization by considering two independent cases.
Lin, T., Shi, Y., Shu, N., Cheng, D., Hong, X., Song, J., Gwee, B. H.  2020.  Deep Learning-Based Image Analysis Framework for Hardware Assurance of Digital Integrated Circuits. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1–6.
We propose an Artificial Intelligence (AI)/Deep Learning (DL)-based image analysis framework for hardware assurance of digital integrated circuits (ICs). Our aim is to examine and verify various hardware information by analyzing Scanning Electron Microscope (SEM) images of an IC. In our proposed framework, we apply DL-based methods at all essential steps of the analysis. To the best of our knowledge, this is the first such framework that makes heavy use of DL-based methods at all essential analysis steps. Further, to reduce the time and effort required in model re-training, we propose various automated or semi-automated training data preparation methods and demonstrate the effectiveness of using synthetic data to train a model. By applying our proposed framework to analyzing a set of SEM images of a large digital IC, we demonstrate its efficacy. Our DL-based methods are fast, accurate, robust against noise, and can automate tasks that were previously performed mainly manually. Overall, we show that DL-based methods can greatly increase the level of automation in hardware assurance of digital ICs and improve its accuracy.
Fourastier, Y., Baron, C., Thomas, C., Esteban, P.  2020.  Assurance Levels for Decision Making in Autonomous Intelligent Systems and Their Safety. 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT). :475–483.
The autonomy of intelligent systems and their safety rely on their ability to make local decisions based on collected environmental information. This is all the more true for cyber-physical systems running safety-critical activities. While this intelligence is partial and fragmented, and cognitive techniques are of limited maturity, the decision function must produce results whose validity and scope must be weighed in light of the underlying assumptions, unavoidable uncertainty, and hypothetical safety limitations. Beyond the dependability of the cognitive techniques, it is a matter of the assurance level of the self-made decisions. Beyond the pure decision-making capabilities of the autonomous intelligent system, we need techniques that guarantee the system assurance required for the intended use. Security mechanisms for cognitive systems may consequently be tightly intertwined. We propose a trustworthiness module which is part of the system and contributes to its resulting safety. In this paper, we briefly review the state of the art regarding the dependability of cognitive techniques, the definition of assurance levels in this context, and related engineering practices. We elaborate on the design of autonomous intelligent systems for safety, then discuss their security design and approaches for the mitigation of safety violations by the cognitive functions.
Smith, B., Feather, M. S., Huntsberger, T., Bocchino, R.  2020.  Software Assurance of Autonomous Spacecraft Control. 2020 Annual Reliability and Maintainability Symposium (RAMS). :1–7.
Summary & Conclusions: The work described addresses assurance of a planning and execution software system being added to an in-orbit CubeSat to demonstrate autonomous control of that spacecraft. Our focus was on how to develop assurance of the correct operation of the added software in its operational context; our approach was to use an assurance case to guide and organize the information involved. The relatively manageable magnitude of the CubeSat and its autonomy demonstration experiment made it plausible to try out our assurance approach in a relatively short timeframe. Additionally, the time was ripe to inject useful assurance results into the ongoing development and testing of the autonomy demonstration. In conducting this, we sought to answer several questions about our assurance approach. The questions, and the conclusions we reached, are as follows: 1. Question: Would our approach to assurance apply to the introduction of planning and execution software into an existing system? Conclusion: Yes. The use of an assurance case helped focus our attention on the more challenging aspects, notably the interactions between the added software and the existing software system into which it was being introduced. This guided us to choose a hazard analysis method specifically for software interactions. In addition, we were able to automate generation of assurance case elements from the hazard analysis' tabular representation. 2. Question: Would our methods prove understandable to the software engineers tasked with integrating the software into the CubeSat's existing system? Conclusion: Somewhat. In interim discussions with the software engineers, we found the assurance case style of decomposing an argument into smaller pieces to be useful and understandable for organizing discussion. Ultimately, however, we did not persuade them to adopt assurance cases as the means to present review information. We attribute this to reluctance to deviate from JPL's tried and true style of holding reviews. For the CubeSat project as a whole, hosting an autonomy demonstration was already a novelty; combining this with presentation of review information via an assurance case, with which our reviewers would be unaccustomed, would have exacerbated the unfamiliarity. 3. Question: Would conducting our methods prove to be compatible with the (limited) time available to the software engineers? Conclusion: Yes. We used a series of six brief meetings (approximately one hour each) with the development team to first identify the interactions as the area on which to focus, and to then perform the hazard analysis on those interactions. We used the meetings to confirm, or correct as necessary, our understanding of the software system and the spacecraft context. Between meetings we studied the existing software documentation, did preliminary analyses by ourselves, and documented the results in a concise form suitable for discussion with the team. 4. Question: Would our methods yield useful results to the software engineers? Conclusion: Yes. The hazard analysis systematically confirmed existing hazards' mitigations and drew attention to a mitigation whose implementation needed particular care. In some cases, the analysis identified potential hazards, and what to do about them, should some of the more sophisticated capabilities of the planning and execution software be used. These capabilities, not exercised in the initial experiments on the CubeSat, may be used in future experiments. We remain involved with the developers as they prepare for these future experiments, so our analysis results will be of benefit as these proceed.
2021-04-08
Boato, G., Dang-Nguyen, D., Natale, F. G. B. De.  2020.  Morphological Filter Detector for Image Forensics Applications. IEEE Access. 8:13549–13560.
Mathematical morphology provides a large set of powerful non-linear image operators, widely used for feature extraction, noise removal, or image enhancement. Although morphological filters might be used to remove artifacts produced by image manipulations, both on binary and gray-level documents, little effort has been spent on their forensic identification. In this paper we propose a non-trivial extension of a deterministic approach originally designed to detect erosion and dilation of binary images. The proposed approach operates on grayscale images and is robust to image compression and other typical attacks. When the image is attacked, the method loses its deterministic nature and relies on a properly trained SVM classifier, using the original detector as a feature extractor. Extensive tests demonstrate that the proposed method guarantees very high accuracy in filtering detection, providing 100% accuracy in discriminating the presence and the type of morphological filter in raw images of three different datasets. The achieved accuracy is also good after JPEG compression, equal to or above 76.8% on all datasets for quality factors above 80. The proposed approach is also able to determine the adopted structuring element for moderate compression factors. Finally, it is robust against noise addition and can distinguish morphological filters from other filters.
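The deterministic core of such detectors rests on a morphological identity: a dilated image is a union of translates of the structuring element and is therefore a fixed point of opening (dually, an eroded image is a fixed point of closing). A minimal sketch, assuming the structuring element is known (blindly estimating it is part of what the paper addresses):

import numpy as np
from scipy import ndimage

def looks_dilated(img, size=3):
    # Dilation satisfies dilate(erode(dilate(f))) == dilate(f), so a dilated
    # image is unchanged by grayscale opening with the same flat element.
    return np.array_equal(ndimage.grey_opening(img, size=(size, size)), img)

def looks_eroded(img, size=3):
    return np.array_equal(ndimage.grey_closing(img, size=(size, size)), img)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
dilated = ndimage.grey_dilation(img, size=(3, 3))
print(looks_dilated(img), looks_dilated(dilated))  # False True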
Zhang, J., Liao, Y., Zhu, X., Wang, H., Ding, J.  2020.  A Deep Learning Approach in the Discrete Cosine Transform Domain to Median Filtering Forensics. IEEE Signal Processing Letters. 27:276–280.
This letter presents a novel median filtering forensics approach, based on a convolutional neural network (CNN) with an adaptive filtering layer (AFL), which is built in the discrete cosine transform (DCT) domain. Using the proposed AFL, the CNN can determine the main frequency range closely related with the operational traces. Then, to automatically learn the multi-scale manipulation features, a multi-scale convolutional block is developed, exploring a new multi-scale feature fusion strategy based on the maxout function. The resultant features are further processed by a convolutional stream with pooling and batch normalization operations, and finally fed into the classification layer with the Softmax function. Experimental results show that our proposed approach is able to accurately detect the median filtering manipulation and outperforms the state-of-the-art schemes, especially in the scenarios of low image resolution and serious compression loss.
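For readers unfamiliar with the preprocessing, the sketch below shows the kind of DCT-domain view such a network operates on, and why it is informative: median filtering suppresses high-frequency content. The AFL and the CNN itself are not reproduced here:

import numpy as np
from scipy.fft import dctn
from scipy.ndimage import median_filter

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(np.float64)
med = median_filter(img, size=3)

# Orthonormal 2-D DCT of each image; compare high-frequency band energy.
F_img = dctn(img, type=2, norm="ortho")
F_med = dctn(med, type=2, norm="ortho")
hi = (slice(32, None), slice(32, None))  # high-frequency quadrant
print(np.abs(F_img[hi]).mean() > np.abs(F_med[hi]).mean())  # typically True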
Mayer, O., Stamm, M. C.  2020.  Forensic Similarity for Digital Images. IEEE Transactions on Information Forensics and Security. 15:1331–1346.
In this paper, we introduce a new digital image forensics approach called forensic similarity, which determines whether two image patches contain the same forensic trace or different forensic traces. One benefit of this approach is that prior knowledge, e.g., training samples, of a forensic trace is not required to make a forensic similarity decision on it in the future. To do this, we propose a two-part deep-learning system composed of a convolutional neural network-based feature extractor and a three-layer neural network called the similarity network. This system maps pairs of image patches onto a score indicating whether they contain the same or different forensic traces. We evaluated the accuracy of the system in determining whether two image patches were captured by the same or different camera models, manipulated by the same or a different editing operation, and manipulated with the same or a different manipulation parameter for a given editing operation. Experiments demonstrate applicability to a variety of forensic traces and, importantly, show efficacy on “unknown” forensic traces that were not used to train the system. Experiments also show that the proposed system significantly improves upon prior art, reducing error rates by more than half. Furthermore, we demonstrate the utility of the forensic similarity approach in two practical applications: forgery detection and localization, and database consistency verification.
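A minimal PyTorch sketch of this two-part design (a shared feature extractor applied to both patches, plus a three-layer similarity network on the concatenated features); the layer sizes are illustrative, not the paper's architecture:

import torch
import torch.nn as nn

class ForensicSimilarity(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.extractor = nn.Sequential(            # shared (siamese) extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, feat_dim),
        )
        self.similarity = nn.Sequential(           # three-layer similarity net
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, patch_a, patch_b):
        fa, fb = self.extractor(patch_a), self.extractor(patch_b)
        return torch.sigmoid(self.similarity(torch.cat([fa, fb], dim=1)))

model = ForensicSimilarity()
a, b = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
print(model(a, b).shape)  # torch.Size([4, 1]) - P(same forensic trace)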
Rhee, K. H.  2020.  Composition of Visual Feature Vector Pattern for Deep Learning in Image Forensics. IEEE Access. 8:188970–188980.
In image forensics, to determine whether an image has been impurely transformed, the features included in the suspicious image are extracted and examined. In general, the features extracted for the detection of forged images are numerical values, so they are somewhat unsuitable for direct use in a CNN structure for image classification. In this paper, the feature vector is extracted using a least-squares solution: a suspicious image is treated as a matrix, and the coefficients of its least-squares solution are taken as the feature vector. Two such solutions are obtained from two images, the original and its median filter residual (MFR). Subsequently, the two features are formed into a visualized pattern and then fed into CNN deep learning to classify the various transformed images. A new structure of the CNN net layer was also designed, hybridizing the inception module and the residual block, to classify the visualized feature vector patterns. The performance of the proposed image forensics detection (IFD) scheme was measured with seven transformed types of image: average filtered (window size: 3 × 3), Gaussian filtered (window size: 3 × 3), JPEG compressed (quality factor: 90, 70), median filtered (window size: 3 × 3, 5 × 5), and unaltered. The visualized patterns are fed into the image input layer of the designed CNN hybrid model. Throughout the experiments, the accuracy of median filtering detection was over 98%. Also, the area under the curve (AUC) by sensitivity (TP: true positive rate) and 1-specificity (FP: false positive rate) of the proposed IFD scheme approached 1 on the designed CNN hybrid model. Experimental results show high efficiency and performance in classifying the various transformed images. Therefore, the grade evaluation of the proposed scheme is “Excellent (A)”.
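The mechanics of the least-squares feature can be sketched in a few lines; the exact linear system used in the paper may be set up differently, so the column split below is an assumption made for illustration:

import numpy as np
from scipy.ndimage import median_filter

def ls_feature(img):
    # Treat the image as a linear system A x = b and use the least-squares
    # coefficients x as the feature vector.
    A, b = img[:, :-1].astype(float), img[:, -1].astype(float)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (32, 32)).astype(float)
mfr = img - median_filter(img, size=3)        # median filter residual
feature = np.concatenate([ls_feature(img), ls_feature(mfr)])
print(feature.shape)  # (62,) - visualized as a pattern and fed to the CNN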
Guerrini, F., Dalai, M., Leonardi, R.  2020.  Minimal Information Exchange for Secure Image Hash-Based Geometric Transformations Estimation. IEEE Transactions on Information Forensics and Security. 15:3482–3496.
Signal processing applications dealing with secure transmission are enjoying increasing attention lately. This paper provides some theoretical insights as well as a practical solution for transmitting a hash of an image to a central server to be compared with a reference image. The proposed solution employs a rigid image registration technique viewed in a distributed source coding perspective. In essence, it embodies a phase encoding framework to let the decoder estimate the transformation parameters using a very modest amount of information about the original image. The problem is first cast in an ideal setting and then solved in a realistic scenario, giving more prominence to low computational complexity in both the transmitter and receiver, minimal hash size, and hash security. Satisfactory experimental results are reported on a standard image set.
Zheng, Y., Cao, Y., Chang, C.  2020.  A PUF-Based Data-Device Hash for Tampered Image Detection and Source Camera Identification. IEEE Transactions on Information Forensics and Security. 15:620–634.
With the increasing prevalence of digital devices and their abuse for digital content creation, forgeries of digital images and video footage are more rampant than ever. Digital forensics is challenged to seek advanced technologies for forged content detection and acquisition device identification. Unfortunately, existing solutions that address image tampering problems fail to identify the device that produced the images or footage, while techniques that can identify the camera are incapable of locating the tampered content of its captured images. In this paper, a new perceptual data-device hash is proposed to locate maliciously tampered image regions and identify the source camera of the received image data as a non-repudiable attestation in digital forensics. The presented image may have been either tampered with or subjected to benign content-preserving geometric transforms or image processing operations. The proposed image hash is generated by projecting the invariant image features into a physical unclonable function (PUF)-defined Bernoulli random space. The tamper-resistant random PUF response is unique for each camera and can only be generated upon being triggered by a challenge, which is provided by the image acquisition timestamp. The proposed hash is evaluated on the modified CASIA database and a CMOS image sensor-based PUF simulated using 180 nm TSMC technology. It achieves a high tamper detection rate of 95.42% with the regions of tampered content successfully located, a good authentication performance of above 98.5% against standard content-preserving manipulations, and 96.25% and 90.42%, respectively, for the more challenging geometric transformations of rotation (0–360°) and scaling (scale factor in each dimension: 0.5). It is demonstrated to be able to identify the source camera with 100% accuracy and is secure against attacks on the PUF.
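The projection step can be sketched as follows, with the PUF response simulated by a seed (in the paper the response comes from CMOS image sensor hardware and a timestamp-derived challenge):

import numpy as np

def data_device_hash(features, puf_response, bits=256):
    # The device-unique PUF response defines a Bernoulli (+/-1) projection;
    # sign-quantizing the projected features yields the binary hash.
    rng = np.random.default_rng(puf_response)   # stand-in for the real PUF
    proj = rng.choice([-1.0, 1.0], size=(bits, features.size))
    return (proj @ features > 0).astype(np.uint8)

feats = np.random.default_rng(3).normal(size=512)   # invariant image features
h_a = data_device_hash(feats, puf_response=0xA5A5)  # camera A
h_b = data_device_hash(feats, puf_response=0x5A5A)  # camera B
print((h_a != h_b).mean())  # same content, different device: ~half the bits differ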
Al-Dhaqm, A., Razak, S. A., Ikuesan, R. A., Kebande, V. R., Siddique, K.  2020.  A Review of Mobile Forensic Investigation Process Models. IEEE Access. 8:173359–173375.
The Mobile Forensics (MF) field uses prescribed scientific approaches with a focus on recovering Potential Digital Evidence (PDE) from mobile devices using forensic techniques. Consequently, the increased proliferation of mobile devices, mobile-based services, and new requirements have driven the development of the MF field, which has in the recent past become an area of importance. In this article, the authors conduct a review of Mobile Forensics Investigation Process Models (MFIPMs) as a step towards uncovering the transitions in MF as well as identifying open and future challenges. Based on the study conducted in this article, a review of the literature revealed that there are a few MFIPMs designed to solve certain mobile scenarios, with a variety of concepts, investigation processes, activities, and tasks. A total of 100 MFIPMs were reviewed to present an inclusive and up-to-date background on MFIPMs. This study also proposes a Harmonized Mobile Forensic Investigation Process Model (HMFIPM) for the MF field to unify and structure the redundant investigation processes of the field. The paper also goes the extra mile to discuss the state of the art of mobile forensic tools and open and future challenges from a generic standpoint. The results of this study are of direct relevance to forensic practitioners and researchers, who can leverage the comprehensiveness of the developed processes for investigation.
Verdoliva, L.  2020.  Media Forensics and DeepFakes: An Overview. IEEE Journal of Selected Topics in Signal Processing. 14:910–932.
With the rapid progress in recent years, techniques that generate and manipulate multimedia content can now provide a very advanced level of realism. The boundary between real and synthetic media has become very thin. On the one hand, this opens the door to a series of exciting applications in different fields such as creative arts, advertising, film production, and video games. On the other hand, it poses enormous security threats. Software packages freely available on the web allow any individual, without special skills, to create very realistic fake images and videos. These can be used to manipulate public opinion during elections, commit fraud, discredit or blackmail people. Therefore, there is an urgent need for automated tools capable of detecting false multimedia content and avoiding the spread of dangerous false information. This review paper aims to present an analysis of the methods for visual media integrity verification, that is, the detection of manipulated images and videos. Special emphasis will be placed on the emerging phenomenon of deepfakes, fake media created through deep learning tools, and on modern data-driven forensic methods to fight them. The analysis will help highlight the limits of current forensic tools, the most relevant issues, the upcoming challenges, and suggest future directions for research.
Guo, T., Zhou, R., Tian, C.  2020.  On the Information Leakage in Private Information Retrieval Systems. IEEE Transactions on Information Forensics and Security. 15:2999–3012.
We consider information leakage to the user in private information retrieval (PIR) systems. Information leakage can be measured in terms of individual message leakage or total leakage. Individual message leakage, or simply individual leakage, is defined as the amount of information that the user can obtain on any individual message that is not being requested, and the total leakage is defined as the amount of information that the user can obtain about all the other messages except the one being requested. In this work, we characterize the tradeoff between the minimum download cost and the individual leakage, and that for the total leakage, respectively. Coding schemes are proposed to achieve these optimal tradeoffs, which are also shown to be optimal in terms of the message size. We further characterize the optimal tradeoff between the minimum amount of common randomness and the total leakage. Moreover, we show that under individual leakage, common randomness is in fact unnecessary when there are more than two messages.
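One standard way to formalize the two leakage notions, writing W_1, ..., W_K for the messages, k for the desired index, and Q and A for the queries and answers (a common formalization; the paper's exact definitions may differ in details such as conditioning on k):

\text{individual leakage:}\quad \max_{j \neq k} I(W_j;\, A, Q)
\text{total leakage:}\quad I(W_1, \dots, W_{k-1}, W_{k+1}, \dots, W_K;\, A, Q)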
Al-Dhaqm, A., Razak, S. A., Dampier, D. A., Choo, K. R., Siddique, K., Ikuesan, R. A., Alqarni, A., Kebande, V. R.  2020.  Categorization and Organization of Database Forensic Investigation Processes. IEEE Access. 8:112846–112858.
Database forensic investigation (DBFI) is an important area of research within digital forensics. Its importance is growing as digital data becomes more extensive and commonplace. The challenges associated with DBFI are numerous, and one of them is the lack of a harmonized DBFI process for investigators to follow. In this paper, therefore, we conduct a survey of existing literature with the hope of understanding the body of work already accomplished. Furthermore, we build on the existing literature to present a harmonized DBFI process using design science research methodology. This harmonized DBFI process has been developed based on three key categories (i.e., planning, preparation and pre-response; acquisition and preservation; and analysis and reconstruction). Furthermore, the DBFI process has been designed to avoid confusion or ambiguity, as well as to provide practitioners with a systematic method of performing DBFI with a higher degree of certainty.
Yang, Z., Sun, Q., Zhang, Y., Zhu, L., Ji, W.  2020.  Inference of Suspicious Co-Visitation and Co-Rating Behaviors and Abnormality Forensics for Recommender Systems. IEEE Transactions on Information Forensics and Security. 15:2766–2781.
The pervasiveness of personalized collaborative recommender systems has demonstrated their powerful capability in a wide range of E-commerce services such as Amazon, TripAdvisor, Yelp, etc. However, fundamental vulnerabilities of collaborative recommender systems leave space for malicious users to affect the recommendation results as the attackers desire. A vast majority of existing detection methods assume that certain properties of malicious attacks are given in advance. In reality, improving detection performance is usually constrained by several challenging issues: (a) various types of malicious attacks coexist, (b) representations of malicious attack behaviors are limited, and (c) practical evidence for exploring and spotting anomalies in real-world data is scarce. In this paper, we investigate a unified detection framework in an eye-for-an-eye manner without being bothered by the details of the attacks. Firstly, co-visitation and co-rating graphs are constructed using association rules. Then, attribute representations of nodes are empirically developed from the perspectives of linkage pattern, structure-based properties, and inherent associations of nodes. Finally, both the attribute information and the connective coherence of the graph are combined to infer suspicious nodes. Extensive experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed detection approach compared with competing benchmarks. Additionally, abnormality forensics metrics, including the distribution of rating intention, time aggregation of suspicious ratings, degree distributions before and after removing suspicious nodes, and time series analysis of historical ratings, are provided so as to uncover interesting findings such as suspicious nodes (items or ratings) in real-world data.
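The first step, building a co-visitation graph, can be sketched simply (the association-rule filtering and the node attribute construction from the paper are omitted; the user-item data below is hypothetical):

from collections import Counter
from itertools import combinations

visits = {  # hypothetical user -> items visited
    "u1": ["a", "b", "c"],
    "u2": ["a", "b"],
    "u3": ["b", "c"],
}

# Edge weight = number of users who co-visited the two items.
co_visits = Counter()
for items in visits.values():
    for pair in combinations(sorted(set(items)), 2):
        co_visits[pair] += 1

print(co_visits[("a", "b")])  # 2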
Tyagi, H., Vardy, A.  2015.  Universal Hashing for Information-Theoretic Security. Proceedings of the IEEE. 103:1781–1795.
The information-theoretic approach to security entails harnessing the correlated randomness available in nature to establish security. It uses tools from information theory and coding and yields provable security, even against an adversary with unbounded computational power. However, the feasibility of this approach in practice depends on the development of efficiently implementable schemes. In this paper, we review a special class of practical schemes for information-theoretic security that are based on 2-universal hash families. Specific cases of secret key agreement and wiretap coding are considered, and general themes are identified. The scheme presented for wiretap coding is modular and can be implemented easily by including an extra preprocessing layer over the existing transmission codes.
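A classic example of such a family is the Carter-Wegman construction h_{a,b}(x) = ((ax + b) mod p) mod m, which guarantees a collision probability of at most 1/m for any two distinct inputs; a minimal sketch:

import random

P = (1 << 61) - 1  # a Mersenne prime larger than the input universe

def sample_hash(m):
    # Drawing (a, b) at random selects one member of the 2-universal family.
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = sample_hash(m=2**16)
print(h(42), h(43))  # for x != y, Pr[h(x) == h(y)] <= 1/m over the draw of h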
Sarkar, M. Z. I., Ratnarajah, T.  2010.  Information-Theoretic Security in Wireless Multicasting. International Conference on Electrical and Computer Engineering (ICECE 2010). :53–56.
In this paper, a wireless multicast scenario is considered in which the transmitter sends a common message to a group of client receivers through quasi-static Rayleigh fading channels in the presence of an eavesdropper. The communication between the transmitter and each client receiver is said to be secure if the eavesdropper is unable to decode any information. On the basis of an information-theoretic formulation of the confidential communications between the transmitter and a group of client receivers, we define the expected secrecy sum-mutual information in terms of the secure outage probability and provide a complete characterization of the maximum transmission rate at which the eavesdropper is unable to decode any information. Moreover, we find the probability of non-zero secrecy mutual information and present an analytical expression for the ergodic secrecy multicast mutual information of the proposed model.
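For a single receiver-eavesdropper pair over quasi-static Rayleigh fading, the probability of non-zero secrecy capacity has the well-known closed form P(gamma_r > gamma_e) = avg_r / (avg_r + avg_e), which a short Monte Carlo check confirms (the average SNR values below are illustrative):

import numpy as np

rng = np.random.default_rng(4)
avg_r, avg_e = 10.0, 3.0                 # mean SNRs of receiver / eavesdropper
gamma_r = rng.exponential(avg_r, 10**6)  # Rayleigh fading -> exponential SNR
gamma_e = rng.exponential(avg_e, 10**6)
print((gamma_r > gamma_e).mean())        # ~0.769 empirically
print(avg_r / (avg_r + avg_e))           # 0.7692... closed form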
Venkitasubramaniam, P., Yao, J., Pradhan, P.  2015.  Information-Theoretic Security in Stochastic Control Systems. Proceedings of the IEEE. 103:1914–1931.
Infrastructural systems such as the electricity grid, healthcare, and transportation networks today rely increasingly on the joint functioning of networked information systems and physical components, in short, on cyber-physical architectures. Despite tremendous advances in cryptography, physical-layer security and authentication, information attacks, both passive such as eavesdropping, and active such as unauthorized data injection, continue to thwart the reliable functioning of networked systems. In systems with joint cyber-physical functionality, the ability of an adversary to monitor transmitted information or introduce false information can lead to sensitive user data being leaked or result in critical damages to the underlying physical system. This paper investigates two broad challenges in information security in cyber-physical systems (CPSs): preventing retrieval of internal physical system information through monitored external cyber flows, and limiting the modification of physical system functioning through compromised cyber flows. A rigorous analytical framework grounded on information-theoretic security is developed to study these challenges in a general stochastic control system abstraction (a theoretical building block for CPSs), with the objectives of quantifying the fundamental tradeoffs between information security and physical system performance, and through the process, designing provably secure controller policies. Recent results are presented that establish the theoretical basis for the framework, in addition to practical applications in timing analysis of anonymous systems, and demand response systems in a smart electricity grid.
Colbaugh, R., Glass, K., Bauer, T.  2013.  Dynamic Information-Theoretic Measures for Security Informatics. 2013 IEEE International Conference on Intelligence and Security Informatics. :45–49.
Many important security informatics problems require consideration of dynamical phenomena for their solution; examples include predicting the behavior of individuals in social networks and distinguishing malicious and innocent computer network activities based on activity traces. While information theory offers powerful tools for analyzing dynamical processes, to date the application of information-theoretic methods in security domains has focused on static analyses (e.g., cryptography, natural language processing). This paper leverages information-theoretic concepts and measures to quantify the similarity of pairs of stochastic dynamical systems, and shows that this capability can be used to solve important problems which arise in security applications. We begin by presenting a concise review of the information theory required for our development, and then address two challenging tasks: 1.) characterizing the way influence propagates through social networks, and 2.) distinguishing malware from legitimate software based on the instruction sequences of the disassembled programs. In each application, case studies involving real-world datasets demonstrate that the proposed techniques outperform standard methods.
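One concrete instance of such a dynamic measure can be sketched as follows, under the assumption of first-order Markov dynamics and with toy opcode sequences standing in for real disassembly:

import math
from collections import Counter

def transition_model(seq, alphabet, alpha=1.0):
    # Laplace-smoothed first-order transition probabilities P(b | a).
    counts, totals = Counter(zip(seq, seq[1:])), Counter(seq[:-1])
    return {(a, b): (counts[(a, b)] + alpha) / (totals[a] + alpha * len(alphabet))
            for a in alphabet for b in alphabet}

def cross_entropy(seq, model):
    # Bits per transition of seq under the model; low = dynamically similar.
    pairs = list(zip(seq, seq[1:]))
    return -sum(math.log2(model[p]) for p in pairs) / len(pairs)

alphabet = ["mov", "add", "jmp", "call"]
benign = ["mov", "add", "mov", "add", "call", "mov", "add"]
suspect = ["jmp", "call", "jmp", "call", "jmp", "call", "jmp"]
m = transition_model(benign, alphabet)
print(cross_entropy(benign, m) < cross_entropy(suspect, m))  # True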
Cao, Z., Deng, H., Lu, L., Duan, X.  2014.  An Information-Theoretic Security Metric for Future Wireless Communication Systems. 2014 XXXIth URSI General Assembly and Scientific Symposium (URSI GASS). :1–4.
Quantitative analysis of security properties in wireless communication systems is an important issue; it helps us obtain a comprehensive view of security and can be used to compare the security performance of different systems. This paper analyzes the security of future wireless communication systems from an information-theoretic point of view and proposes an overall security metric. We demonstrate that the proposed metric is more reasonable than some existing metrics, and that it is highly sensitive to some basic parameters and thus helpful for fine-grained tuning of security performance.
Jin, R., He, X., Dai, H.  2019.  On the Security-Privacy Tradeoff in Collaborative Security: A Quantitative Information Flow Game Perspective. IEEE Transactions on Information Forensics and Security. 14:3273–3286.
To counter rapidly developing cyber-attacks, numerous collaborative security schemes, in which multiple security entities can exchange their observations and other relevant data to achieve more effective security decisions, have been proposed and developed in the literature. However, the security-related information shared among the security entities may contain sensitive information, and such information exchange can raise privacy concerns, especially when these entities belong to different organizations. With such considerations, the interplay between the attacker and the collaborative entities is formulated as Quantitative Information Flow (QIF) games, in which QIF theory is adapted to measure the collaboration gain and the privacy loss of the entities in the information sharing process. In particular, three games are considered, each corresponding to one possible scenario of interest in practice. Based on the game-theoretic analysis, the expected behaviors of both the attacker and the security entities are obtained. In addition, simulation results are presented to validate the analysis.
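As background on the QIF machinery, the basic min-entropy leakage of a channel under a uniform prior is straightforward to compute; the two-row channel below is a toy example, not a model from the paper:

import math

def min_entropy_leakage(channel):
    # channel[x][y] = P(y | x); leakage = log2 of the sum over outputs of the
    # column maxima (multiplicative Bayes-vulnerability gain, uniform prior).
    vuln = sum(max(row[y] for row in channel) for y in range(len(channel[0])))
    return math.log2(vuln)

noisy = [[0.75, 0.25],   # a channel that half-reveals a 1-bit secret
         [0.25, 0.75]]
print(min_entropy_leakage(noisy))  # 0.585 bits = log2(1.5)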