Biblio

Filters: Keyword is Control Theory and Keyword is Privacy
2023-05-12
Wei, Yuecen, Fu, Xingcheng, Sun, Qingyun, Peng, Hao, Wu, Jia, Wang, Jinyan, Li, Xianxian.  2022.  Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation. 2022 IEEE International Conference on Data Mining (ICDM). :528–537.
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and analyzed with heterogeneous graph neural networks (HGNNs). Compared to models trained on homogeneous data, HGNNs absorb many aspects of information about individuals during training, which means more information is captured in the learning result, especially sensitive information. However, privacy-preserving methods for homogeneous graphs only protect a single type of node attribute or relationship, and cannot work effectively on heterogeneous graphs because of their complexity. To address this issue, we propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism, named HeteDP, which provides a double guarantee on graph features and topology. In particular, we first define a new attack scheme to reveal privacy leakage in heterogeneous graphs. We then design a two-stage pipeline framework, consisting of a privacy-preserving feature encoder and a heterogeneous link reconstructor with gradient perturbation based on differential privacy, to tolerate data diversity and defend against the attack. To better control the noise and promote model performance, we utilize a bi-level optimization pattern to allocate a suitable privacy budget to the two modules. Our experiments on four public benchmarks show that the HeteDP method is equipped to resist heterogeneous graph privacy leakage with admirable model generalization.
ISSN: 2374-8486
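The gradient-perturbation step of HeteDP is only described at a high level above. Below is a minimal NumPy sketch of differentially private gradient perturbation in the DP-SGD style (per-example clipping followed by Gaussian noise); the clipping norm, noise multiplier, and gradient shapes are illustrative assumptions, not values from the paper.

    import numpy as np

    def dp_perturbed_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
        """Clip each per-example gradient, average, then add Gaussian noise.
        This is the standard DP-SGD recipe; HeteDP's exact mechanism may differ."""
        clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]
        mean_grad = np.mean(clipped, axis=0)
        sigma = noise_multiplier * clip_norm / len(per_example_grads)
        return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

    rng = np.random.default_rng(0)
    grads = [rng.normal(size=8) for _ in range(32)]  # stand-in per-example gradients
    print(dp_perturbed_gradient(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng))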
Lai, Chengzhe, Wang, Menghua, Zheng, Dong.  2022.  SPDT: Secure and Privacy-Preserving Scheme for Digital Twin-based Traffic Control. 2022 IEEE/CIC International Conference on Communications in China (ICCC). :144–149.
With the increasing complexity of the driving environment, more and more attention has been paid to research on making traffic control more intelligent. In particular, the digital twin-based Internet of Vehicles can establish a mirror system in the cloud to improve the efficiency of communication between vehicles, provide warnings and safety instructions for drivers, and avoid potential driving hazards. To ensure the security and effectiveness of data sharing in traffic control, this paper proposes a secure and privacy-preserving scheme for digital twin-based traffic control. Specifically, in the data uploading phase, we employ a group signature scheme with time-bound keys to realize data source authentication with efficient member revocation and privacy protection, ensuring that data can be securely stored on cloud service providers after it is synchronized to its twin. In the data sharing stage, we employ a secure and efficient attribute-based access control technique to provide flexible and efficient data sharing, in which the parameters of a specific sub-policy can be stored during the first decryption and reused in subsequent data accesses containing the same sub-policy, thus reducing the computational complexity. Finally, we analyze the security and efficiency of the scheme theoretically.
ISSN: 2377-8644
Zhang, Qirui, Meng, Siqi, Liu, Kun, Dai, Wei.  2022.  Design of Privacy Mechanism for Cyber Physical Systems: A Nash Q-learning Approach. 2022 China Automation Congress (CAC). :6361–6365.

This paper studies the problem of designing an optimal privacy mechanism with low energy cost. An eavesdropper and a defender, each with limited resources, must choose which channel to eavesdrop on and which to defend, respectively. A zero-sum stochastic game framework is used to model the interaction between the two players, and the game is solved through the Nash Q-learning approach. A numerical example is given to verify the proposed method.

ISSN: 2688-0938
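In the two-player zero-sum setting above, the Nash Q-learning update reduces to minimax-Q: the value of each state is the value of a matrix game over the current Q-table, computable by linear programming. A toy sketch follows; the channel model, rewards, and 0.1 energy cost are invented for illustration and are not the paper's model.

    import numpy as np
    from scipy.optimize import linprog

    def matrix_game(A):
        """Value and maximin mixed strategy of a zero-sum matrix game
        (row player maximizes): max v s.t. A^T p >= v, sum(p) = 1, p >= 0."""
        m, n = A.shape
        c = np.zeros(m + 1); c[-1] = -1.0                 # minimize -v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])         # v - (A^T p)_j <= 0
        A_eq = np.ones((1, m + 1)); A_eq[0, -1] = 0.0     # probabilities sum to 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, 1)] * m + [(None, None)])
        return res.x[-1], res.x[:m]

    # State = channel currently carrying sensitive data; both players pick a channel.
    n_states = n_act = 2
    gamma, alpha, explore = 0.9, 0.1, 0.2
    Q = np.zeros((n_states, n_act, n_act))    # defender payoff; eavesdropper gets -Q
    rng = np.random.default_rng(1)
    s = 0
    for _ in range(2000):
        a_d = int(rng.integers(n_act)) if rng.random() < explore \
            else int(matrix_game(Q[s])[1].argmax())
        a_e = int(rng.integers(n_act))        # exploring eavesdropper
        caught, leaked = a_e == a_d, (a_e == s and a_e != a_d)
        r = (1.0 if caught else -1.0 if leaked else 0.0) - 0.1  # 0.1 = toy energy cost
        s_next = int(rng.integers(n_states))  # sensitive data hops channels
        v_next, _ = matrix_game(Q[s_next])
        Q[s, a_d, a_e] += alpha * (r + gamma * v_next - Q[s, a_d, a_e])
        s = s_next
    print("defender's equilibrium mix in state 0:", matrix_game(Q[0])[1])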

Luo, Man, Yan, Hairong.  2022.  A graph anonymity-based privacy protection scheme for smart city scenarios. 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :489–492.
The development of science and technology has led to the construction of smart cities, in which many applications need to provide their real-time location information, making leakage of personal location privacy very likely. To address this situation, this paper designs a location privacy protection scheme based on graph anonymity, built on the K-anonymity idea. It represents the spatial distribution among APs as a graph model and searches that model for clustered noisy fingerprint information whose behavior during localization is similar to that of the real location fingerprint, so that it cannot be distinguished by the location providers. Experiments show that this scheme can improve the effectiveness of virtual locations and reduce the time cost through a greedy strategy, thereby effectively protecting location privacy.
ISSN: 2689-6621
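The clustered-noisy-fingerprint search is not spelled out above, but the K-anonymity idea it rests on can be sketched simply: hide the real AP fingerprint among the k-1 candidates most similar to it. In the sketch below, Euclidean distance over RSSI vectors is an assumed similarity measure and the candidate pool stands in for fingerprints mined from the graph model.

    import numpy as np

    def k_anonymous_fingerprints(real_fp, candidates, k):
        """Greedily pick the k-1 candidate fingerprints closest to the real one."""
        d = np.linalg.norm(candidates - real_fp, axis=1)
        return candidates[np.argsort(d)[:k - 1]]

    rng = np.random.default_rng(0)
    real = rng.uniform(-90, -30, size=6)         # RSSI readings from 6 APs (toy)
    cand = rng.uniform(-90, -30, size=(200, 6))  # stand-in for graph-mined noisy fingerprints
    cloak = k_anonymous_fingerprints(real, cand, k=5)
    print(np.vstack([real, cloak]))              # real fingerprint hidden among 4 similar ones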
Yu, Juan.  2022.  Research on Location Information and Privacy Protection Based on Big Data. 2022 International Conference on Industrial IoT, Big Data and Supply Chain (IIoTBDSC). :226–229.

In the big data era, to prevent malicious access and information leakage during data services, researchers have explored location big data encryption methods based on privacy protection. With the development of information networks in recent years, users' location information is frequently obtained arbitrarily in the network environment, which not only threatens their privacy but also hampers the effective transmission of information. This study therefore proposes an encryption method for location big data with privacy protection at its core: it first clarifies how location big data and positioning information are represented, distinguishes processed location information from unknown information, and then applies fuzzy encryption theory and dynamic regrouping of location data to construct an encryption algorithm centered on privacy protection. The empirical results show that this method can not only effectively block the intrusion of attack data, but also effectively control the error of location data encryption.

Naseri, Amir Mohammad, Lucia, Walter, Youssef, Amr.  2022.  A Privacy Preserving Solution for Cloud-Enabled Set-Theoretic Model Predictive Control. 2022 European Control Conference (ECC). :894–899.
Cloud computing solutions enable Cyber-Physical Systems (CPSs) to utilize significant computational resources and implement sophisticated control algorithms even if limited computation capabilities are locally available for these systems. However, such a control architecture suffers from an important concern related to the privacy of sensor measurements and the computed control inputs within the cloud. This paper proposes a solution that allows implementing a set-theoretic model predictive controller on the cloud while preserving this privacy. This is achieved by exploiting the offline computations of the robust one-step controllable sets used by the controller and two affine transformations of the sensor measurements and control optimization problem. It is shown that the transformed and original control problems are equivalent (i.e., the optimal control input can be recovered from the transformed one) and that privacy is preserved if the control algorithm is executed on the cloud. Moreover, we show how the actuator can take advantage of the set-theoretic nature of the controller to verify, through simple set-membership tests, if the control input received from the cloud is admissible. The correctness of the proposed solution is verified by means of a simulation experiment involving a dual-tank water system.
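The controller in the paper is set-theoretic MPC, but its central privacy device, affine transformations whose effect the plant can undo locally, can be illustrated with a plain linear feedback law. A minimal sketch under that simplification (the gain K and the masking pair (T, t) are arbitrary stand-ins):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 3, 2
    K = rng.normal(size=(m, n))                  # nominal feedback gain, computed offline

    # Offline: secret invertible affine masking of the measurement, y = T @ x + t
    T = rng.normal(size=(n, n)) + n * np.eye(n)  # diagonally shifted, invertible w.h.p.
    t = rng.normal(size=n)
    K_cloud = K @ np.linalg.inv(T)               # transformed gain shipped to the cloud
    b_cloud = -K_cloud @ t

    x = rng.normal(size=n)                       # true measurement, never leaves the plant
    u_cloud = K_cloud @ (T @ x + t) + b_cloud    # what the cloud computes from masked data
    print(np.allclose(u_cloud, K @ x))           # True: original input recovered exactly

The same algebra is what makes the transformed and original control problems equivalent in the paper; the set-membership admissibility test at the actuator is an additional safeguard not shown here.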
Qin, Shuying, Fang, Chongrong, He, Jianping.  2022.  Towards Characterization of General Conditions for Correlated Differential Privacy. 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems (MASS). :364–372.
Differential privacy is a widely-used metric, which provides rigorous privacy definitions and strong privacy guarantees. Most existing studies on differential privacy are based on datasets where the tuples are independent, and thus are not suitable for correlated data protection. In this paper, we focus on correlated differential privacy, by taking the data correlations and the prior knowledge of the initial data into account. The data correlations are modeled by Bayesian conditional probabilities, and the prior knowledge refers to the exact values of the data. We propose general correlated differential privacy conditions for the discrete and continuous random noise-adding mechanisms, respectively. In case the conditions are inaccurate due to insufficient prior knowledge, we introduce tuple dependence based on rough set theory to improve the correlated differential privacy conditions. The obtained theoretical results reveal the relationship between the correlations and the privacy parameters. Moreover, the improved privacy condition helps strengthen the mechanism utility. Finally, evaluations are conducted over a micro-grid system to verify the privacy protection levels and utility guaranteed by correlated differential private mechanisms.
ISSN: 2155-6814
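As a loose illustration of why correlation changes the noise calibration: under tuple correlations, changing one record can shift a query by more than its nominal sensitivity, so the Laplace scale must grow. The (1 + ρ) inflation factor below is a made-up stand-in for the paper's Bayesian-conditional-probability conditions.

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng):
        """Standard Laplace mechanism: noise scale = sensitivity / epsilon."""
        return value + rng.laplace(0.0, sensitivity / epsilon)

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=100).astype(float)  # toy binary dataset
    rho = 0.8                                       # assumed strength of pairwise correlation
    corr_sensitivity = 1.0 * (1.0 + rho)            # illustrative inflation, not the paper's bound
    print(laplace_mechanism(x.sum(), corr_sensitivity, epsilon=0.5, rng=rng))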
Yao, Jingshi, Yin, Xiang, Li, Shaoyuan.  2022.  Sensor Deception Attacks Against Initial-State Privacy in Supervisory Control Systems. 2022 IEEE 61st Conference on Decision and Control (CDC). :4839–4845.
This paper investigates the problem of synthesizing sensor deception attacks against privacy in the context of supervisory control of discrete-event systems (DES). We consider a plant controlled by a supervisor, which is subject to sensor deception attacks. Specifically, we consider an active attacker that can tamper with the observations received by the supervisor. The privacy requirement of the supervisory control system is to maintain initial-state opacity, i.e., it does not want to reveal the fact that it was initiated from a secret state during its operation. On the other hand, the attacker aims to deceive the supervisor, by tampering with its observations, such that initial-state opacity is violated due to incorrect control actions. We investigate the problem from the attacker’s point of view by presenting an effective approach for synthesizing sensor attack strategies threatening the privacy of the system. To this end, we propose the All Attack Structure (AAS) that records state estimates for both the supervisor and the attacker. This structure serves as a basis for synthesizing a sensor attack strategy. We also discuss how to simplify the synthesis complexity by leveraging the structural properties. A running academic example is provided to illustrate the synthesis procedure.
ISSN: 2576-2370
Arca, Sevgi, Hewett, Rattikorn.  2022.  Anonymity-driven Measures for Privacy. 2022 6th International Conference on Cryptography, Security and Privacy (CSP). :6–10.
In today’s world, digital data are enormous due to technologies that advance data collection, storage, and analyses. As more data are shared or publicly available, privacy is of great concern. Having privacy means having control over your data. The first step towards privacy protection is to understand various aspects of privacy and have the ability to quantify them. Much work on structured data, however, has focused on approaches to transforming the original data into a more anonymous form (via generalization and suppression) while preserving the data integrity. Such anonymization techniques count data instances of each set of distinct attribute values of interest to signify the required anonymity to protect an individual’s identity or confidential data. While this serves the purpose, our research takes an alternative approach to provide quick privacy measures by way of anonymity, especially when dealing with large-scale data. This paper presents a study of anonymity measures based on their relevant properties that impact privacy. Specifically, we identify three properties: uniformity, variety, and diversity, and formulate their measures. The paper provides illustrated examples to evaluate their validity and discusses the use of multiple aspects of anonymity and privacy measures.
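The paper’s exact formulas are not reproduced above, so the sketch below instantiates the three named properties with plausible stand-ins: variety as the fraction of distinct quasi-identifier combinations, diversity as the Shannon entropy of equivalence-class sizes, and uniformity as that entropy normalized to [0, 1].

    import numpy as np
    from collections import Counter

    def anonymity_measures(records):
        """Quick anonymity measures over quasi-identifier tuples (illustrative formulas)."""
        counts = np.array(list(Counter(map(tuple, records)).values()), dtype=float)
        p = counts / counts.sum()
        variety = len(counts) / counts.sum()   # distinct combinations / records
        diversity = -np.sum(p * np.log(p))     # entropy of class sizes
        uniformity = diversity / np.log(len(counts)) if len(counts) > 1 else 1.0
        return uniformity, variety, diversity

    data = [("F", 30), ("F", 30), ("M", 30), ("M", 40), ("F", 40), ("F", 30)]
    print(anonymity_measures(data))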
2022-08-26
Chen, Bo, Hawkins, Calvin, Yazdani, Kasra, Hale, Matthew.  2021.  Edge Differential Privacy for Algebraic Connectivity of Graphs. 2021 60th IEEE Conference on Decision and Control (CDC). :2764–2769.
Graphs are the dominant formalism for modeling multi-agent systems. The algebraic connectivity of a graph is particularly important because it provides the convergence rates of consensus algorithms that underlie many multi-agent control and optimization techniques. However, sharing the value of algebraic connectivity can inadvertently reveal sensitive information about the topology of a graph, such as connections in social networks. Therefore, in this work we present a method to release a graph’s algebraic connectivity under a graph-theoretic form of differential privacy, called edge differential privacy. Edge differential privacy obfuscates differences among graphs’ edge sets and thus conceals the absence or presence of sensitive connections therein. We provide privacy with bounded Laplace noise, which improves accuracy relative to conventional unbounded noise. The private algebraic connectivity values are analytically shown to provide accurate estimates of consensus convergence rates, as well as accurate bounds on the diameter of a graph and the mean distance between its nodes. Simulation results confirm the utility of private algebraic connectivity in these contexts.
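A minimal sketch of the release pipeline described above: compute the Fiedler value of the graph Laplacian and publish it with Laplace noise confined to a bounded interval. The sensitivity value is an assumption, and rejection-sampled truncation is a simplification; the paper’s bounded Laplace mechanism adjusts the distribution so the edge differential privacy guarantee stays exact.

    import numpy as np

    def algebraic_connectivity(adj):
        """Second-smallest eigenvalue of the graph Laplacian (the Fiedler value)."""
        L = np.diag(adj.sum(axis=1)) - adj
        return np.sort(np.linalg.eigvalsh(L))[1]

    def bounded_laplace(scale, lo, hi, rng):
        """Laplace noise kept inside [lo, hi] by rejection sampling (a simplification)."""
        while True:
            z = rng.laplace(0.0, scale)
            if lo <= z <= hi:
                return z

    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    eps, sensitivity = 1.0, 2.0   # assumed edge-level sensitivity of lambda_2
    noisy = algebraic_connectivity(A) + bounded_laplace(sensitivity / eps, -1.0, 1.0, rng)
    print(algebraic_connectivity(A), noisy)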
Bahrami, Mohammad, Jafarnejadsani, Hamidreza.  2021.  Privacy-Preserving Stealthy Attack Detection in Multi-Agent Control Systems. 2021 60th IEEE Conference on Decision and Control (CDC). :4194–4199.
This paper develops a glocal (global-local) attack detection framework to detect stealthy cyber-physical attacks, namely covert attack and zero-dynamics attack, against a class of multi-agent control systems seeking average consensus. The detection structure consists of a global (central) observer and local observers for the multi-agent system partitioned into clusters. The proposed structure addresses the scalability of the approach and the privacy preservation of the multi-agent system’s state information. The former is addressed by using decentralized local observers, and the latter is achieved by imposing unobservability conditions at the global level. Also, the communication graph model is subject to topology switching, triggered by local observers, allowing for the detection of stealthy attacks by the global observer. Theoretical conditions are derived for detectability of the stealthy attacks using the proposed detection framework. Finally, a numerical simulation is provided to validate the theoretical findings.
Russo, Alessio, Proutiere, Alexandre.  2021.  Minimizing Information Leakage of Abrupt Changes in Stochastic Systems. 2021 60th IEEE Conference on Decision and Control (CDC). :2750–2757.
This work investigates the problem of analyzing privacy of abrupt changes for general Markov processes. These processes may be affected by changes, or exogenous signals, that need to remain private. Privacy refers to the disclosure of information of these changes through observations of the underlying Markov chain. In contrast to previous work on privacy, we study the problem for an online sequence of data. We use theoretical tools from optimal detection theory to motivate a definition of online privacy based on the average amount of information per observation of the stochastic system in consideration. Two cases are considered: the full-information case, where the eavesdropper measures all but the signals that indicate a change, and the limited-information case, where the eavesdropper only measures the state of the Markov process. For both cases, we provide ways to derive privacy upper-bounds and compute policies that attain a higher privacy level. It turns out that the problem of computing privacy-aware policies is concave, and we conclude with some examples and numerical simulations for both cases.
Liu, Tianyu, Di, Boya, Wang, Shupeng, Song, Lingyang.  2021.  A Privacy-Preserving Incentive Mechanism for Federated Cloud-Edge Learning. 2021 IEEE Global Communications Conference (GLOBECOM). :1–6.
The federated learning scheme enhances privacy preservation by avoiding the uploading of private data in cloud-edge computing. However, attacks against the uploaded model updates can still cause private data leakage, which demotivates privacy-sensitive edge devices from participating. Facing this issue, we aim to design a privacy-preserving incentive mechanism for the federated cloud-edge learning (PFCEL) system such that 1) the edge devices are motivated to actively contribute to the updated model uploading, and 2) a trade-off between private data leakage and model accuracy is achieved. We formulate the incentive design problem as a three-layer Stackelberg game, where the server-device interaction is further formulated as a contract design problem. Extensive numerical evaluations demonstrate the effectiveness of our designed mechanism in terms of privacy preservation and system utility.
Zuo, Zhiqiang, Tian, Ran, Wang, Yijing.  2021.  Bipartite Consensus for Multi-Agent Systems with Differential Privacy Constraint. 2021 40th Chinese Control Conference (CCC). :5062–5067.
This paper studies the differential privacy-preserving problem of discrete-time multi-agent systems (MASs) with antagonistic information, where the connected signed graph is structurally balanced. First, we introduce bipartite consensus definitions in the mean square and almost sure senses, respectively. Second, some criteria for mean square and almost sure bipartite consensus are derived, where the eventual value is related to the gauge matrix and the agents’ initial states. Third, we design the ε-differential privacy algorithm and characterize the tradeoff between differential privacy and system performance. Finally, simulations validate the effectiveness of the proposed algorithm.
Sun, Zice, Wang, Yingjie, Tong, Xiangrong, Pan, Qingxian, Liu, Wenyi, Zhang, Jiqiu.  2021.  Service Quality Loss-aware Privacy Protection Mechanism in Edge-Cloud IoTs. 2021 13th International Conference on Advanced Computational Intelligence (ICACI). :207–214.
With the continuous development of edge computing, the application scope of mobile crowdsourcing (MCS) is constantly increasing. The distributed nature of edge computing allows data to be processed near where it is generated, meeting the need for low latency. The trustworthiness of the third-party platform affects the level of privacy protection, because the platform’s managers may disclose workers’ information, and anonymous servers also belong to third-party platforms. For untrusted third-party platforms, this paper recommends that workers first use a localized differential privacy mechanism to perturb their real location information and then upload it to an anonymous server to request services; we call this the localized differential anonymous privacy protection mechanism (LDNP). The two privacy protection mechanisms together further enhance privacy protection, but exacerbate the loss of service quality. Therefore, this paper proposes giving compensation based on the authenticity of the location information uploaded by workers, so as to encourage more workers to upload real location information. Through comparative experiments on real data, the LDNP algorithm not only protects the location privacy of workers, but also maintains the availability of data. A simulation experiment verifies the effectiveness of the incentive mechanism.
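The LDNP perturbation step is not specified above; a standard instantiation of localized differential privacy for locations is the planar Laplace mechanism of geo-indistinguishability (Andrés et al.), sketched below. The coordinates and ε are illustrative, and the paper’s mechanism may differ.

    import numpy as np
    from scipy.special import lambertw

    def planar_laplace(loc, epsilon, rng):
        """Perturb a 2-D location with polar Laplace noise (geo-indistinguishability)."""
        theta = rng.uniform(0.0, 2.0 * np.pi)   # uniform direction
        p = rng.uniform(0.0, 1.0)
        r = -(1.0 / epsilon) * (np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0)
        return loc + r * np.array([np.cos(theta), np.sin(theta)])

    rng = np.random.default_rng(0)
    true_loc = np.array([116.40, 39.90])        # toy coordinates
    print(planar_laplace(true_loc, epsilon=0.5, rng=rng))  # perturbed before upload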
Chowdhury, Sayak Ray, Zhou, Xingyu, Shroff, Ness.  2021.  Adaptive Control of Differentially Private Linear Quadratic Systems. 2021 IEEE International Symposium on Information Theory (ISIT). :485–490.
In this paper we study the problem of regret minimization in reinforcement learning (RL) under differential privacy constraints. This work is motivated by the wide range of RL applications for providing personalized service, where privacy concerns are becoming paramount. In contrast to previous works, we take the first step towards non-tabular RL settings, while providing a rigorous privacy guarantee. In particular, we consider the adaptive control of differentially private linear quadratic (LQ) systems. We develop the first private RL algorithm, Private-OFU-RL, which is able to attain a sub-linear regret while guaranteeing privacy protection. More importantly, the additional cost due to privacy is only on the order of ln(1/δ)^(1/4)/ε^(1/2) given privacy parameters ε, δ > 0. Through this process, we also provide a general procedure for adaptive control of LQ systems under changing regularizers, which not only generalizes previous non-private controls, but also serves as the basis for general private controls.
2021-06-02
Xiong, Yi, Li, Zhongkui.  2020.  Privacy Preserving Average Consensus by Adding Edge-based Perturbation Signals. 2020 IEEE Conference on Control Technology and Applications (CCTA). :712–717.
In this paper, the privacy-preserving average consensus problem of multi-agent systems with a strongly connected and weight-balanced graph is considered. In most existing consensus algorithms, the agents need to exchange their state information, which leads to the disclosure of their initial states. This might be undesirable because agents’ initial states may contain important and sensitive information. To solve this problem, we propose a novel distributed algorithm, which guarantees average consensus while preserving the agents’ privacy. The algorithm assigns additive perturbation signals to the communication edges, and these perturbation signals are added to the original true states before information is exchanged. This ensures that direct disclosure of the initial states is avoided. A rigorous analysis of the algorithm’s privacy-preserving performance is then provided: for any individual agent in the network, we present a necessary and sufficient condition under which its privacy is preserved. The effectiveness of our algorithm is demonstrated by a numerical simulation.
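The mechanics of edge-based perturbation can be sketched in a few lines: on a symmetric graph, perturbation signals attached to each directed edge cancel in the network-wide sum, so the exact average is preserved while early exchanges are masked; letting the signals decay restores consensus. The toy below assumes an undirected ring and geometric decay, whereas the paper treats directed weight-balanced graphs.

    import numpy as np

    rng = np.random.default_rng(0)
    N, step, decay = 5, 0.1, 0.8
    A = np.zeros((N, N))
    for i in range(N):                         # ring topology, guaranteed connected
        A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0

    x = rng.normal(10.0, 5.0, size=N)          # private initial states
    avg0 = x.mean()
    Q = rng.normal(0.0, 5.0, size=(N, N)) * A  # edge perturbation magnitudes
    for k in range(200):
        P = Q * decay ** k                     # decaying signal p_ij(k) on edge (i, j)
        sent = x[:, None] + P                  # sent[i, j] = x_i + p_ij(k)
        # symmetric weights make the perturbations cancel in the sum over all agents,
        # so the running average is exact while individual exchanges are masked
        x = x + step * np.sum(A * (sent.T - sent), axis=1)
    print(x, avg0)                             # states converge to the true average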
Das, Sima, Panda, Ganapati.  2020.  An Initiative Towards Privacy Risk Mitigation Over IoT Enabled Smart Grid Architecture. 2020 International Conference on Renewable Energy Integration into Smart Grids: A Multidisciplinary Approach to Technology Modelling and Simulation (ICREISG). :168–173.
The Internet of Things (IoT) has transformed many application domains with real-time, continuous, automated control and information transmission. The smart grid is one such futuristic application domain, with a large-scale IoT network as its backbone. By leveraging the functionalities and characteristics of IoT, the smart grid infrastructure benefits not only consumers, but also service providers and power generation organizations. The confluence of IoT and the smart grid comes with its own set of challenges: the underlying cyberspace of IoT facilitates communication (information propagation) among devices of the smart grid infrastructure, but it undermines privacy at the same time. In this paper we propose a new measure for quantifying the probability of privacy leakage based on the behaviors of the devices involved in the communication process. We construct a privacy stochastic game model based on the information shared by a device and on access to compromised devices. The existence of a Nash equilibrium strategy of the game is proved theoretically. We experimentally validate the effectiveness of the privacy stochastic game model.
Yazdani, Kasra, Hale, Matthew.  2020.  Error Bounds and Guidelines for Privacy Calibration in Differentially Private Kalman Filtering. 2020 American Control Conference (ACC). :4423–4428.
Differential privacy has emerged as a formal framework for protecting sensitive information in control systems. One key feature is that it is immune to post-processing, which means that arbitrary post-hoc computations can be performed on privatized data without weakening differential privacy. It is therefore common to filter private data streams. To characterize this setup, in this paper we present error and entropy bounds for Kalman filtering differentially private state trajectories. We consider systems in which an output trajectory is privatized in order to protect the state trajectory that produced it. We provide bounds on a priori and a posteriori error and differential entropy of a Kalman filter which is processing the privatized output trajectories. Using the error bounds we develop, we then provide guidelines to calibrate privacy levels in order to keep filter error within pre-specified bounds. Simulation results are presented to demonstrate these developments.
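A scalar sketch of the setup above: the output stream is privatized with Gaussian noise before filtering, and the Kalman filter simply treats the privacy noise as extra measurement noise. The Gaussian-mechanism scale below is the textbook (ε, δ) calibration, and the unit sensitivity is an assumption; the paper derives the resulting error and entropy bounds.

    import numpy as np

    rng = np.random.default_rng(0)
    a, c, q, r = 0.95, 1.0, 0.1, 0.2   # toy scalar system parameters
    eps, delta, sens = 1.0, 1e-5, 1.0  # DP parameters; sensitivity assumed
    sigma_p = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps  # Gaussian mechanism scale

    x, xhat, P = 0.0, 0.0, 1.0
    for _ in range(100):
        x = a * x + rng.normal(0.0, np.sqrt(q))                             # true state
        y = c * x + rng.normal(0.0, np.sqrt(r)) + rng.normal(0.0, sigma_p)  # privatized output
        xhat, P = a * xhat, a * P * a + q                                   # predict
        K = P * c / (c * P * c + r + sigma_p ** 2)          # privacy noise folded into R
        xhat, P = xhat + K * (y - c * xhat), (1 - K * c) * P                # update
    print(x, xhat, P)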
Anbumani, P., Dhanapal, R.  2020.  Review on Privacy Preservation Methods in Data Mining Based on Fuzzy Based Techniques. 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). :689–694.
The most significant motivation behind data mining algorithms is to mine previously unknown patterns from extremely large data sets. In recent times there have been many remarkable improvements in data collection owing to advances in information technology. Privacy issues in data preservation have received little attention in the process mining community; nonetheless, several privacy-preserving data transformation techniques have been proposed in the data mining community. Data mining and process mining have much in common, yet there are key differences that make privacy-preserving data mining methods unsuitable for anonymizing process data. Results based on data mining algorithms can be used in various areas, for example marketing, weather forecasting, and image analysis. It has also been revealed that sensitive data can appear in the output of a mining algorithm. Such privacy can be safeguarded using privacy preservation techniques (PPT). PPT is an important concept in data mining because data exchanged between parties needs security, so that outsiders cannot learn what data was actually transferred. Privacy preservation in data mining means not exposing sensitive information in the mining output, achieved by various methods, since the output data is valuable. There are two broad approaches: one alters the input data, and the other alters the output data. The method proposed here for privacy preservation in database environments is data transformation, which uses a fuzzy triangular membership function to transform the original data set.
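The fuzzy triangular transformation mentioned at the end can be illustrated in a few lines; the membership parameters below (support endpoints at the data extremes, peak at the mean) are an assumed instantiation, since the review does not fix them.

    import numpy as np

    def triangular_membership(x, a, b, c):
        """Fuzzy triangular membership with support [a, c] and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    rng = np.random.default_rng(0)
    data = rng.uniform(20, 80, size=10)                  # original sensitive values
    a, b, c = data.min(), data.mean(), data.max()
    transformed = triangular_membership(data, a, b, c)   # released instead of raw values
    print(np.round(data, 1), np.round(transformed, 2))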
Priyanka, J., Rajeshwari, K. Raja, Ramakrishnan, M.  2020.  Operative Access Regulator for Attribute Based Generalized Signcryption Using Rough Set Theory. 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). :458–460.
Personal health records (PHRs) can be shared and preserved easily with cloud storage, yet privacy and security remain among the main drawbacks of cloud health data storage. To address these security concerns, this paper experiments with an operative access regulator for attribute-based generalized signcryption using rough set theory. Using rough set theory, the classification of attributes is improved, and the attributes required for the decryption process are determined using reduct and core. Generalized signcryption provides priority-wise access to diminish the cost and raise the effectiveness of the proposed model. The PHR is stored under the access priorities of signature-only, encryption-only, and signcryption-only modes. The performance of the proposed ABGS fulfills secrecy, authentication, and the other security principles.
Xu, Yizheng.  2020.  Application Research Based on Machine Learning in Network Privacy Security. 2020 International Conference on Computer Information and Big Data Applications (CIBDA). :237–240.
As the hottest frontier technology in the field of artificial intelligence, machine learning is transforming various industries step by step. In the future, it will penetrate all aspects of our lives and become an indispensable technology around us. Network security is one area where machine learning can show its strengths. Among the many network security problems, privacy protection is one of the more difficult ones, so it needs new technologies, new methods, and new ideas such as machine learning to help solve it. The research content comprises four parts: an overview of machine learning, the significance of machine learning in network security, the application process of machine learning in network security research, and the application of machine learning in privacy protection. The paper focuses on issues related to privacy protection and proposes combining the most advanced matching algorithms from deep learning with information-theoretic data protection technology, so as to introduce them into biometric authentication. While ensuring that the loss of matching accuracy is minimal, a high-standard privacy protection algorithm is obtained, which enables businesses, government entities, and end users to more widely accept privacy protection technology.
Gohari, Parham, Hale, Matthew, Topcu, Ufuk.  2020.  Privacy-Preserving Policy Synthesis in Markov Decision Processes. 2020 59th IEEE Conference on Decision and Control (CDC). :6266–6271.
In decision-making problems, the actions of an agent may reveal sensitive information that drives its decisions. For instance, a corporation's investment decisions may reveal its sensitive knowledge about market dynamics. To prevent this type of information leakage, we introduce a policy synthesis algorithm that protects the privacy of the transition probabilities in a Markov decision process. We use differential privacy as the mathematical definition of privacy. The algorithm first perturbs the transition probabilities using a mechanism that provides differential privacy. Then, based on the privatized transition probabilities, we synthesize a policy using dynamic programming. Our main contribution is to bound the "cost of privacy," i.e., the difference between the expected total rewards with privacy and the expected total rewards without privacy. We also show that computing the cost of privacy has time complexity that is polynomial in the parameters of the problem. Moreover, we establish that the cost of privacy increases with the strength of differential privacy protections, and we quantify this increase. Finally, numerical experiments on two example environments validate the established relationship between the cost of privacy and the strength of data privacy protections.
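The two-step recipe above (privatize the transition probabilities, then run dynamic programming) and the "cost of privacy" are easy to mock up. In the sketch below, Laplace-perturb-then-project is an illustrative stand-in for the paper’s mechanism, and the cost of privacy is measured as the per-state value lost by running the privately synthesized policy in the true MDP.

    import numpy as np

    def privatize_transitions(P, epsilon, rng):
        """Laplace-perturb transition rows, then project back onto the simplex."""
        noisy = np.clip(P + rng.laplace(0.0, 1.0 / epsilon, size=P.shape), 1e-6, None)
        return noisy / noisy.sum(axis=-1, keepdims=True)

    def value_iteration(P, R, gamma=0.9, iters=300):
        V = np.zeros(R.shape[0])
        for _ in range(iters):
            V = np.max(R + gamma * (P @ V), axis=1)
        return np.argmax(R + gamma * (P @ V), axis=1)

    def policy_value(P, R, pi, gamma=0.9):
        """Exact value of policy pi in the true MDP."""
        nS = R.shape[0]
        Ppi, Rpi = P[np.arange(nS), pi], R[np.arange(nS), pi]
        return np.linalg.solve(np.eye(nS) - gamma * Ppi, Rpi)

    rng = np.random.default_rng(0)
    nS, nA = 4, 2
    P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # sensitive true transitions
    R = rng.uniform(size=(nS, nA))
    pi_true = value_iteration(P, R)
    pi_priv = value_iteration(privatize_transitions(P, epsilon=2.0, rng=rng), R)
    gap = policy_value(P, R, pi_true) - policy_value(P, R, pi_priv)
    print("cost of privacy (per-state value loss):", np.round(gap, 4))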
Gursoy, M. Emre, Rajasekar, Vivekanand, Liu, Ling.  2020.  Utility-Optimized Synthesis of Differentially Private Location Traces. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :30–39.
Differentially private location trace synthesis (DPLTS) has recently emerged as a solution to protect mobile users' privacy while enabling the analysis and sharing of their location traces. A key challenge in DPLTS is to best preserve the utility in location trace datasets, which is non-trivial considering the high dimensionality, complexity and heterogeneity of datasets, as well as the diverse types and notions of utility. In this paper, we present OptaTrace: a utility-optimized and targeted approach to DPLTS. Given a real trace dataset D, the differential privacy parameter ε controlling the strength of privacy protection, and the utility/error metric Err of interest; OptaTrace uses Bayesian optimization to optimize DPLTS such that the output error (measured in terms of given metric Err) is minimized while ε-differential privacy is satisfied. In addition, OptaTrace introduces a utility module that contains several built-in error metrics for utility benchmarking and for choosing Err, as well as a front-end web interface for accessible and interactive DPLTS service. Experiments show that OptaTrace's optimized output can yield substantial utility improvement and error reduction compared to previous work.
Wang, Lei, Manchester, Ian R., Trumpf, Jochen, Shi, Guodong.  2020.  Initial-Value Privacy of Linear Dynamical Systems. 2020 59th IEEE Conference on Decision and Control (CDC). :3108–3113.
This paper studies initial-value privacy problems of linear dynamical systems. We consider a standard linear time-invariant system with random process and measurement noises. For such a system, eavesdroppers having access to system output trajectories may infer the system initial states, leading to initial-value privacy risks. When a finite number of output trajectories are eavesdropped, we consider a requirement that any guess about the initial values can be plausibly denied. When an infinite number of output trajectories are eavesdropped, we consider a requirement that the initial values should not be uniquely recoverable. In view of these two privacy requirements, we define differential initial-value privacy and intrinsic initial-value privacy, respectively, for the system as metrics of privacy risks. First of all, we prove that the intrinsic initial-value privacy is equivalent to unobservability, while the differential initial-value privacy can be achieved for a privacy budget depending on an extended observability matrix of the system and the covariance of the noises. Next, the inherent network nature of the considered linear system is explored, where each individual state corresponds to a node and the state and output matrices induce interaction and sensing graphs, leading to a network system. Under this network system perspective, we allow the initial states at some nodes to be public, and investigate the resulting intrinsic initial-value privacy of each individual node. We establish necessary and sufficient conditions for such individual node initial-value privacy, and also prove that the intrinsic initial-value privacy of individual nodes is generically determined by the network structure.
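The stated equivalence between intrinsic initial-value privacy and unobservability suggests a simple numerical check: build the observability matrix and test its rank. The toy system below (two nodes whose sum is the only sensed output) is rank-deficient, so the individual initial values cannot be uniquely recovered; the matrices are illustrative, not from the paper.

    import numpy as np

    def observability_matrix(A, C):
        n = A.shape[0]
        return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

    A = np.eye(2)                   # two decoupled nodes
    C = np.array([[1.0, 1.0]])      # only the sum of the two states is sensed
    O = observability_matrix(A, C)
    unobservable = np.linalg.matrix_rank(O) < A.shape[0]
    print("intrinsically private initial values:", unobservable)   # True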