Xiong, Yi; Li, Zhongkui.
2020.
Privacy Preserving Average Consensus by Adding Edge-based Perturbation Signals. 2020 IEEE Conference on Control Technology and Applications (CCTA). pp. 712–717.
In this paper, the privacy-preserving average consensus problem is considered for multi-agent systems over strongly connected and weight-balanced graphs. In most existing consensus algorithms, the agents need to exchange their state information, which leads to the disclosure of their initial states. This may be undesirable because agents' initial states can contain important and sensitive information. To solve this problem, we propose a novel distributed algorithm that guarantees average consensus while preserving the agents' privacy. The algorithm assigns additive perturbation signals to the communication edges, and these perturbation signals are added to the original true states before information exchange, so that direct disclosure of the initial states is avoided. A rigorous analysis of the algorithm's privacy-preserving performance is then provided: for any individual agent in the network, we present a necessary and sufficient condition under which its privacy is preserved. The effectiveness of the algorithm is demonstrated by a numerical simulation.
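As a concrete illustration of the edge-perturbation idea, the following minimal Python sketch runs average consensus on a weight-balanced cycle graph with zero-sum perturbation signals injected on the edges. The two-step perturbation schedule and all numerical values are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Row- and column-stochastic weight matrix on an undirected cycle
# (a simple weight-balanced graph).
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

x = rng.uniform(0.0, 10.0, n)   # sensitive initial states
true_avg = x.mean()

# Illustrative edge perturbations: random values injected at k = 0 and negated
# at k = 1, so each edge's perturbation sums to zero over time.
mask = (W > 0) & ~np.eye(n, dtype=bool)
S0 = rng.normal(0.0, 5.0, (n, n)) * mask

for k in range(300):
    S = S0 if k == 0 else (-S0 if k == 1 else np.zeros((n, n)))
    # Agent i transmits x_i + s_ij(k) to neighbor j; each agent mixes what it receives.
    x = W @ x + (W * S.T).sum(axis=1)

print("states:", x.round(4), " true average:", round(true_avg, 4))
```

Because each edge's perturbation telescopes to zero and the weight matrix is column-stochastic, the network still converges to the true average even though the first transmissions mask the initial states.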
Das, Sima; Panda, Ganapati.
2020.
An Initiative Towards Privacy Risk Mitigation Over IoT Enabled Smart Grid Architecture. 2020 International Conference on Renewable Energy Integration into Smart Grids: A Multidisciplinary Approach to Technology Modelling and Simulation (ICREISG). pp. 168–173.
The Internet of Things (IoT) has transformed many application domains with real-time, continuous, automated control and information transmission. The smart grid is one such futuristic application domain already in execution, with a large-scale IoT network as its backbone. By leveraging the functionalities and characteristics of IoT, the smart grid infrastructure benefits not only consumers but also service providers and power generation organizations. The confluence of IoT and the smart grid comes with its own set of challenges: the underlying cyberspace of IoT, while facilitating communication (information propagation) among the devices of the smart grid infrastructure, undermines privacy at the same time. In this paper we propose a new measure for quantifying the probability of privacy leakage based on the behaviors of the devices involved in the communication process. We construct a privacy stochastic game model based on the information shared by a device and on access to compromised devices. The existence of a Nash equilibrium strategy of the game is proved theoretically, and we experimentally validate the effectiveness of the privacy stochastic game model.
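The paper's stochastic game is general; the sketch below solves only a hypothetical two-player, zero-sum abstraction of the attacker/defender interaction by fictitious play, to illustrate how an equilibrium leakage probability could be computed. All payoffs are invented.

```python
import numpy as np

# Hypothetical attacker payoffs (probability-of-leakage contributions) for a
# 2x2 zero-sum abstraction: rows = attacker targets, cols = defender hardening.
A = np.array([[0.1, 0.8],
              [0.7, 0.2]])

# Fictitious play: each player best-responds to the opponent's empirical mix;
# for finite zero-sum games the empirical frequencies converge to equilibrium.
row_counts = np.ones(2)   # attacker action counts
col_counts = np.ones(2)   # defender action counts
for _ in range(20000):
    row_counts[np.argmax(A @ col_counts)] += 1   # attacker maximizes leakage
    col_counts[np.argmin(row_counts @ A)] += 1   # defender minimizes leakage

p = row_counts / row_counts.sum()
q = col_counts / col_counts.sum()
print("attacker mix:", p.round(3), " defender mix:", q.round(3))
print("equilibrium leakage probability ~", round(float(p @ A @ q), 3))
```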
Yazdani, Kasra; Hale, Matthew.
2020.
Error Bounds and Guidelines for Privacy Calibration in Differentially Private Kalman Filtering. 2020 American Control Conference (ACC). pp. 4423–4428.
Differential privacy has emerged as a formal framework for protecting sensitive information in control systems. One key feature is that it is immune to post-processing, which means that arbitrary post-hoc computations can be performed on privatized data without weakening differential privacy. It is therefore common to filter private data streams. To characterize this setup, in this paper we present error and entropy bounds for Kalman filtering differentially private state trajectories. We consider systems in which an output trajectory is privatized in order to protect the state trajectory that produced it. We provide bounds on a priori and a posteriori error and differential entropy of a Kalman filter which is processing the privatized output trajectories. Using the error bounds we develop, we then provide guidelines to calibrate privacy levels in order to keep filter error within pre-specified bounds. Simulation results are presented to demonstrate these developments.
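The following sketch illustrates the qualitative setup, assuming a scalar system and the standard Gaussian-mechanism calibration sigma = Delta*sqrt(2 ln(1.25/delta))/eps; the paper derives formal error and entropy bounds rather than iterating a Riccati recursion numerically.

```python
import numpy as np

# Scalar LTI system: x_{k+1} = a*x_k + w_k, y_k = c*x_k + v_k.
a, c, Q, R = 0.95, 1.0, 0.1, 0.2

def steady_posterior_variance(R_eff, iters=500):
    """Iterate the scalar Riccati recursion to the steady-state
    a-posteriori error variance of a Kalman filter with measurement noise R_eff."""
    P = 1.0
    for _ in range(iters):
        P_pred = a * P * a + Q                      # time update
        K = P_pred * c / (c * P_pred * c + R_eff)   # Kalman gain
        P = (1.0 - K * c) * P_pred                  # measurement update
    return P

# Gaussian-mechanism calibration (a common (eps, delta)-DP choice; the paper's
# exact constants may differ), with output sensitivity Delta assumed to be 1.
delta, Delta = 1e-5, 1.0
for eps in (0.1, 0.5, 1.0, 5.0):
    sigma = Delta * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    # The filter treats the privacy noise as extra measurement noise.
    P = steady_posterior_variance(R + sigma**2)
    print(f"eps = {eps:4.1f}  ->  steady a-posteriori variance = {P:.4f}")
```

As eps shrinks (stronger privacy), the effective measurement noise grows and the filter's steady-state error rises, which is the trade-off the paper's calibration guidelines quantify.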
Anbumani, P.; Dhanapal, R.
2020.
Review on Privacy Preservation Methods in Data Mining Based on Fuzzy Based Techniques. 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). pp. 689–694.
The most significant motivation behind data mining algorithms is to mine hidden patterns from extremely large collections of past data, and recent advances in information technology have brought remarkable improvements in data gathering. Privacy issues have not received much attention in the process mining community; however, several privacy-preserving data transformation techniques have been proposed in the data mining community. Although data mining and process mining have much in common, key differences make privacy-preserving data mining techniques unsuitable for anonymizing process data. Results of data mining algorithms are used in various areas such as marketing, weather forecasting and image analysis, and it is also known that sensitive information can be revealed in the output of a mining algorithm. Privacy can be safeguarded here with privacy preservation techniques (PPT), an important concept in data mining: data exchanged between different parties needs security so that outsiders cannot learn what data was actually transferred. Privacy preservation in data mining means not exposing the sensitive output information of the mining process, since that output is precious. There are two broad techniques: one alters the input information and the other alters the output information. The method proposed for privacy preservation in database environments is data transformation, which uses a fuzzy triangular membership function to transform the original data set.
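A triangular membership function is the core primitive in the fuzzy-based transformation the review describes. A minimal sketch, with hypothetical fuzzy-set parameters a, b, c:

```python
def triangular_membership(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical transformation: release membership degrees instead of raw values,
# so the original sensitive values are not directly disclosed.
salaries = [32000, 45000, 58000, 71000, 90000]
a, b, c = 30000, 60000, 95000   # assumed fuzzy-set parameters
print([round(triangular_membership(s, a, b, c), 3) for s in salaries])
```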
Priyanka, J.; Rajeshwari, K. Raja; Ramakrishnan, M.
2020.
Operative Access Regulator for Attribute Based Generalized Signcryption Using Rough Set Theory. 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). pp. 458–460.
The personal health record (PHR) can be shared and preserved easily with cloud core storage, but privacy and security have been among the main shortcomings of core CloudHealthData storage. To address these security concerns, this paper experiments with an operative access regulator for attribute-based generalized signcryption (ABGS) using rough set theory. With rough set theory, the classification of the attributes is improved, and the attributes that are compulsory for the decryption process are identified through the reduct and core. The generalized signcryption defines priority-wise access to diminish the cost and raise the effectiveness of the proposed model. The PHR is stored under the access priorities of signature-only, encryption-only and signcryption-only modes. The performance of the proposed ABGS fulfills secrecy, authentication and other security principles.
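The rough-set machinery of reducts and the core can be illustrated on a toy decision table; the records and attributes below are invented and unrelated to the paper's PHR scheme.

```python
from itertools import combinations

# Toy decision table: condition attributes, then a decision.
records = [
    ("high", "yes", "grant"),
    ("high", "no",  "grant"),
    ("low",  "yes", "deny"),
    ("low",  "no",  "deny"),
]
attrs = (0, 1)   # indices of the condition attributes

def consistent(subset):
    """A subset of attributes preserves the decision if equal condition values
    always imply equal decisions."""
    seen = {}
    for r in records:
        key = tuple(r[a] for a in subset)
        if seen.setdefault(key, r[-1]) != r[-1]:
            return False
    return True

# Reducts are minimal consistent subsets; the core is their intersection,
# i.e. the attributes that are compulsory for classification.
consistent_sets = [s for k in range(1, len(attrs) + 1)
                   for s in combinations(attrs, k) if consistent(s)]
reducts = [s for s in consistent_sets
           if not any(set(t) < set(s) for t in consistent_sets)]
core = set(attrs).intersection(*map(set, reducts)) if reducts else set()
print("reducts:", reducts, " core:", core)
```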
Xu, Yizheng.
2020.
Application Research Based on Machine Learning in Network Privacy Security. 2020 International Conference on Computer Information and Big Data Applications (CIBDA). pp. 237–240.
As the hottest frontier technology in the field of artificial intelligence, machine learning is transforming industry after industry. In the future it will penetrate all aspects of our lives and become an indispensable technology around us. Network security is one area where machine learning can demonstrate its strengths, and among the many network security problems, privacy protection is particularly difficult, so it needs new technologies, new methods and new ideas such as machine learning to help solve some of its problems. The research covers four parts: an overview of machine learning, the significance of machine learning in network security, the application process of machine learning in network security research, and the application of machine learning in privacy protection. The paper focuses on issues related to privacy protection and proposes to combine state-of-the-art matching algorithms from deep learning with information-theoretic data protection technology and to introduce the combination into biometric authentication. While ensuring that the loss of matching accuracy is minimal, a high-standard privacy protection algorithm is obtained, which enables businesses, government entities and end users to accept privacy protection technology more widely.
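The paper does not spell out its matching algorithm; one standard instance of privacy-protected biometric matching is cancelable templates via a secret random projection, sketched below with synthetic embeddings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cancelable-biometrics sketch: protect embeddings with a
# user-specific random projection before matching. Cosine similarity is
# approximately preserved (Johnson-Lindenstrauss), so matching accuracy
# degrades only slightly while raw templates are never stored.
d, k = 512, 256
P = rng.normal(size=(k, d)) / np.sqrt(k)   # user-specific secret projection

def protect(embedding):
    v = P @ embedding
    return v / np.linalg.norm(v)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

enrolled = rng.normal(size=d)
probe_same = enrolled + 0.1 * rng.normal(size=d)   # same person, slight variation
probe_other = rng.normal(size=d)                   # different person

print("genuine score :", round(cosine(protect(enrolled), protect(probe_same)), 3))
print("impostor score:", round(cosine(protect(enrolled), protect(probe_other)), 3))
```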
Gohari, Parham; Hale, Matthew; Topcu, Ufuk.
2020.
Privacy-Preserving Policy Synthesis in Markov Decision Processes. 2020 59th IEEE Conference on Decision and Control (CDC). pp. 6266–6271.
In decision-making problems, the actions of an agent may reveal sensitive information that drives its decisions. For instance, a corporation's investment decisions may reveal its sensitive knowledge about market dynamics. To prevent this type of information leakage, we introduce a policy synthesis algorithm that protects the privacy of the transition probabilities in a Markov decision process. We use differential privacy as the mathematical definition of privacy. The algorithm first perturbs the transition probabilities using a mechanism that provides differential privacy. Then, based on the privatized transition probabilities, we synthesize a policy using dynamic programming. Our main contribution is to bound the "cost of privacy," i.e., the difference between the expected total rewards with privacy and the expected total rewards without privacy. We also show that computing the cost of privacy has time complexity that is polynomial in the parameters of the problem. Moreover, we establish that the cost of privacy increases with the strength of differential privacy protections, and we quantify this increase. Finally, numerical experiments on two example environments validate the established relationship between the cost of privacy and the strength of data privacy protections.
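To see the cost-of-privacy trend empirically, the sketch below perturbs the transition probabilities of a toy MDP and re-runs value iteration; plain Laplace noise with simplex renormalization is used as a stand-in for the paper's formally differentially private mechanism.

```python
import numpy as np

rng = np.random.default_rng(7)

# Tiny MDP: 3 states, 2 actions, random rewards and transitions.
S, A, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # true transitions, shape (S, A, S)
R = rng.uniform(0.0, 1.0, (S, A))            # rewards

def value_iteration(P, iters=500):
    """Optimal value function by dynamic programming."""
    V = np.zeros(S)
    for _ in range(iters):
        V = np.max(R + gamma * P.dot(V), axis=1)
    return V

def privatize(P, eps):
    """Stand-in privatization: Laplace noise on each probability, then clip
    and renormalize back onto the simplex."""
    noisy = np.clip(P + rng.laplace(scale=1.0 / eps, size=P.shape), 1e-6, None)
    return noisy / noisy.sum(axis=-1, keepdims=True)

V_true = value_iteration(P)
for eps in (0.5, 1.0, 5.0, 50.0):
    gap = np.abs(V_true - value_iteration(privatize(P, eps))).max()
    print(f"eps = {eps:5.1f}  ->  value gap (empirical cost of privacy) ~ {gap:.4f}")
```

Smaller eps means stronger privacy, larger perturbation of the transition probabilities, and a larger value gap, mirroring the cost-of-privacy relationship the paper quantifies.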
Gursoy, M. Emre; Rajasekar, Vivekanand; Liu, Ling.
2020.
Utility-Optimized Synthesis of Differentially Private Location Traces. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). pp. 30–39.
Differentially private location trace synthesis (DPLTS) has recently emerged as a solution to protect mobile users' privacy while enabling the analysis and sharing of their location traces. A key challenge in DPLTS is to best preserve the utility in location trace datasets, which is non-trivial considering the high dimensionality, complexity and heterogeneity of datasets, as well as the diverse types and notions of utility. In this paper, we present OptaTrace: a utility-optimized and targeted approach to DPLTS. Given a real trace dataset D, the differential privacy parameter ε controlling the strength of privacy protection, and the utility/error metric Err of interest; OptaTrace uses Bayesian optimization to optimize DPLTS such that the output error (measured in terms of given metric Err) is minimized while ε-differential privacy is satisfied. In addition, OptaTrace introduces a utility module that contains several built-in error metrics for utility benchmarking and for choosing Err, as well as a front-end web interface for accessible and interactive DPLTS service. Experiments show that OptaTrace's optimized output can yield substantial utility improvement and error reduction compared to previous work.
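A drastically simplified stand-in for OptaTrace's loop: the mechanism below privatizes a spatial histogram of synthetic points with the Laplace mechanism, and an exhaustive search over the grid resolution plays the role of Bayesian optimization in minimizing the error metric Err at fixed eps. Everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a location-trace dataset: 2-D points in [0, 1]^2.
points = rng.normal(0.5, 0.15, size=(5000, 2)).clip(0, 1)
eps = 1.0   # differential privacy budget

def err_metric(cells):
    """Privatize a cells x cells spatial histogram with the Laplace mechanism
    (sensitivity 1 per point) and return a total-variation-style error."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=cells, range=[[0, 1], [0, 1]])
    noisy = (H + rng.laplace(scale=1.0 / eps, size=H.shape)).clip(0)
    return np.abs(H / H.sum() - noisy / noisy.sum()).sum() / 2

# Exhaustive search over one mechanism hyperparameter (grid resolution),
# standing in for OptaTrace's Bayesian optimization over mechanism choices.
errors = {c: np.mean([err_metric(c) for _ in range(5)]) for c in (2, 4, 8, 16, 32, 64)}
best = min(errors, key=errors.get)
print({c: round(e, 4) for c, e in errors.items()}, "-> best resolution:", best)
```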
Wang, Lei; Manchester, Ian R.; Trumpf, Jochen; Shi, Guodong.
2020.
Initial-Value Privacy of Linear Dynamical Systems. 2020 59th IEEE Conference on Decision and Control (CDC). pp. 3108–3113.
This paper studies initial-value privacy problems of linear dynamical systems. We consider a standard linear time-invariant system with random process and measurement noises. For such a system, eavesdroppers having access to system output trajectories may infer the system initial states, leading to initial-value privacy risks. When a finite number of output trajectories are eavesdropped, we consider a requirement that any guess about the initial values can be plausibly denied. When an infinite number of output trajectories are eavesdropped, we consider a requirement that the initial values should not be uniquely recoverable. In view of these two privacy requirements, we define differential initial-value privacy and intrinsic initial-value privacy, respectively, for the system as metrics of privacy risks. First, we prove that the intrinsic initial-value privacy is equivalent to unobservability, while the differential initial-value privacy can be achieved for a privacy budget depending on an extended observability matrix of the system and the covariance of the noises. Next, the inherent network nature of the considered linear system is explored, where each individual state corresponds to a node and the state and output matrices induce interaction and sensing graphs, leading to a network system. Under this network system perspective, we allow the initial states at some nodes to be public, and investigate the resulting intrinsic initial-value privacy of each individual node. We establish necessary and sufficient conditions for such individual node initial-value privacy, and also prove that the intrinsic initial-value privacy of individual nodes is generically determined by the network structure.
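The equivalence between intrinsic initial-value privacy and unobservability suggests a simple rank test; the sketch below checks a hypothetical two-node network in which only node 1 is sensed.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1); full column rank iff the initial state is
    uniquely recoverable from noiseless output trajectories."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Hypothetical two-node network in which only node 1 is sensed.
A = np.array([[0.5, 0.0],
              [0.0, 0.8]])   # no interaction between the nodes
C = np.array([[1.0, 0.0]])   # the output reveals node 1 only

rank = np.linalg.matrix_rank(observability_matrix(A, C))
print("rank without coupling:", rank, "of", A.shape[0])
# Rank 1 < 2: node 2's initial value is not uniquely recoverable, i.e. it has
# intrinsic initial-value privacy (privacy is equivalent to unobservability).

A_coupled = np.array([[0.5, 0.3],
                      [0.2, 0.8]])   # adding interaction makes node 2 observable
print("rank with coupling:   ",
      np.linalg.matrix_rank(observability_matrix(A_coupled, C)))
```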