Biblio

Filters: Author is Hale, Matthew
2021-06-02
Yazdani, Kasra, Hale, Matthew.  2020.  Error Bounds and Guidelines for Privacy Calibration in Differentially Private Kalman Filtering. 2020 American Control Conference (ACC). :4423–4428.
Differential privacy has emerged as a formal framework for protecting sensitive information in control systems. One key feature is that it is immune to post-processing, which means that arbitrary post-hoc computations can be performed on privatized data without weakening differential privacy. It is therefore common to filter private data streams. To characterize this setup, in this paper we present error and entropy bounds for Kalman filtering differentially private state trajectories. We consider systems in which an output trajectory is privatized in order to protect the state trajectory that produced it, and we provide bounds on the a priori and a posteriori error and differential entropy of a Kalman filter that processes the privatized output trajectories. Using the error bounds we develop, we then provide guidelines for calibrating privacy levels to keep filter error within pre-specified bounds. Simulation results are presented to demonstrate these developments.
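To make the setup concrete, here is a minimal Python sketch (not the paper's implementation): the output stream is perturbed with a standard Gaussian-mechanism calibration, and a Kalman filter processes the privatized stream by treating the privacy noise as extra measurement noise. The scalar system, noise variances, and sensitivity bound Delta are all illustrative assumptions.

```python
# Minimal sketch: Gaussian-mechanism privatization of an output trajectory,
# then a Kalman filter on the privatized outputs. All parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
A, C = 0.95, 1.0                      # scalar state/output maps (assumed)
Q, R = 0.01, 0.04                     # process/measurement noise variances (assumed)
eps, delta, Delta = 1.0, 1e-2, 1.0    # privacy level and l2-sensitivity (assumed)

# Standard Gaussian-mechanism noise scale for (eps, delta)-differential privacy
sigma = Delta * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

T = 200
x = 0.0                               # true state
x_hat, P = 0.0, 1.0                   # filter estimate and covariance
sq_err = []
for _ in range(T):
    x = A * x + rng.normal(scale=np.sqrt(Q))      # true dynamics
    y = C * x + rng.normal(scale=np.sqrt(R))      # sensor output
    y_priv = y + rng.normal(scale=sigma)          # privatized output

    # Kalman filter on privatized data: effective measurement noise R + sigma^2
    x_pred, P_pred = A * x_hat, A * P * A + Q
    S = C * P_pred * C + R + sigma**2
    K = P_pred * C / S
    x_hat = x_pred + K * (y_priv - C * x_pred)
    P = (1.0 - K * C) * P_pred
    sq_err.append((x - x_hat) ** 2)

print("steady-state a posteriori variance:", P)
print("empirical mean squared error:", np.mean(sq_err))
```

Tightening eps (stronger privacy) grows sigma and hence the steady-state variance, which is the trade-off the paper's calibration guidelines navigate.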
Gohari, Parham, Hale, Matthew, Topcu, Ufuk.  2020.  Privacy-Preserving Policy Synthesis in Markov Decision Processes. 2020 59th IEEE Conference on Decision and Control (CDC). :6266–6271.
In decision-making problems, the actions of an agent may reveal sensitive information that drives its decisions. For instance, a corporation's investment decisions may reveal its sensitive knowledge about market dynamics. To prevent this type of information leakage, we introduce a policy synthesis algorithm that protects the privacy of the transition probabilities in a Markov decision process. We use differential privacy as the mathematical definition of privacy. The algorithm first perturbs the transition probabilities using a mechanism that provides differential privacy. Then, based on the privatized transition probabilities, we synthesize a policy using dynamic programming. Our main contribution is to bound the "cost of privacy," i.e., the difference between the expected total rewards with privacy and the expected total rewards without privacy. We also show that computing the cost of privacy has time complexity that is polynomial in the parameters of the problem. Moreover, we establish that the cost of privacy increases with the strength of differential privacy protections, and we quantify this increase. Finally, numerical experiments on two example environments validate the established relationship between the cost of privacy and the strength of data privacy protections.
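As a rough illustration of the pipeline the abstract describes (perturb the transition probabilities, then plan on the privatized model), here is a hedged Python sketch. The Dirichlet perturbation stands in for the paper's privacy mechanism, and the MDP, rewards, discount factor, and concentration parameter k are invented for illustration; the final value gap is only a proxy for the paper's formally bounded cost of privacy.

```python
# Sketch: perturb each transition distribution on the simplex, then synthesize
# a policy on the privatized model by dynamic programming. All data assumed.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 2, 0.9

# Example MDP: P[a, s] is a distribution over next states (assumed)
P = rng.dirichlet(np.ones(nS), size=(nA, nS))
Rw = rng.uniform(size=(nS, nA))                   # rewards (assumed)

# Dirichlet perturbation: larger k means less noise (weaker privacy)
k = 50.0
P_priv = np.array([[rng.dirichlet(k * P[a, s] + 1e-6)
                    for s in range(nS)] for a in range(nA)])

def value_iteration(P, Rw, tol=1e-8):
    """Optimal values and greedy policy for dynamics P and rewards Rw."""
    V = np.zeros(nS)
    while True:
        Qsa = Rw + gamma * np.einsum('asn,n->sa', P, V)
        V_new = Qsa.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Qsa.argmax(axis=1)
        V = V_new

V_true, _ = value_iteration(P, Rw)
V_priv, pi_priv = value_iteration(P_priv, Rw)

# Evaluate the privately synthesized policy on the TRUE dynamics
P_pi = np.array([P[pi_priv[s], s] for s in range(nS)])
R_pi = Rw[np.arange(nS), pi_priv]
V_pi = np.linalg.solve(np.eye(nS) - gamma * P_pi, R_pi)

# "Cost of privacy": reward lost by planning on the privatized model
print("cost of privacy per state:", V_true - V_pi)
```

Shrinking k injects more noise into the transition probabilities, and the printed gap tends to grow, mirroring the paper's result that the cost of privacy increases with the strength of the privacy protections.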
2020-09-28
Hale, Matthew, Jones, Austin, Leahy, Kevin.  2018.  Privacy in Feedback: The Differentially Private LQG. 2018 Annual American Control Conference (ACC). :3386–3391.
Information communicated within cyber-physical systems (CPSs) is often used in determining the physical states of such systems, and malicious adversaries may intercept these communications in order to infer future states of a CPS or its components. Accordingly, there arises a need to protect the state values of a system. Recently, the notion of differential privacy has been used to protect state trajectories in dynamical systems, and it is this notion of privacy that we use here to protect the state trajectories of CPSs. We incorporate a cloud computer to coordinate the agents comprising the CPSs of interest; the cloud can remotely coordinate many agents, rapidly perform computations, and broadcast the results, making it a natural fit for systems with many interacting agents or components. Striving for broad applicability, we solve infinite-horizon linear-quadratic-regulator (LQR) problems, and each agent protects its own state trajectory by adding noise to its states before they are sent to the cloud. The cloud then uses these state values to generate optimal inputs for the agents. As a result, private data are fed into feedback loops at each iteration, and each noise term affects every future state of every agent. In this paper, we show that the differentially private LQR problem can be related to the well-studied linear-quadratic-Gaussian (LQG) problem, and we provide bounds on how agents' privacy requirements affect the cloud's ability to generate optimal feedback control values for the agents. These results are illustrated in numerical simulations.
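A minimal sketch of the feedback loop the abstract describes, under assumed dynamics and cost matrices: the agent uploads a Gaussian-noised state, and the cloud applies an infinite-horizon LQR gain to the noisy upload, so the privacy noise enters the loop exactly like measurement noise in an LQG problem. The matrices and the noise level sigma are illustrative, not taken from the paper.

```python
# Sketch: privacy noise added to the state before upload; the cloud applies
# an infinite-horizon LQR gain to the noisy state. All parameters assumed.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                   # agent dynamics (assumed)
B = np.array([[0.0],
              [0.1]])
Qc, Rc = np.eye(2), 0.1 * np.eye(1)          # LQR cost weights (assumed)
sigma = 0.2                                  # privacy noise std (assumed)

# Infinite-horizon LQR gain via the discrete algebraic Riccati equation
P = solve_discrete_are(A, B, Qc, Rc)
K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)

x = np.array([1.0, 0.0])
cost = 0.0
for _ in range(200):
    x_up = x + rng.normal(scale=sigma, size=2)   # privatized state sent to cloud
    u = -K @ x_up                                # cloud computes the input
    cost += x @ Qc @ x + u @ Rc @ u
    x = A @ x + B @ u                            # privacy noise re-enters the loop
print("accumulated cost with private feedback:", cost)
```

Raising sigma (stronger privacy) inflates the accumulated cost, which is the privacy/performance trade-off the paper bounds.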
2020-09-21
Pedram, Ali Reza, Tanaka, Takashi, Hale, Matthew.  2019.  Bidirectional Information Flow and the Roles of Privacy Masks in Cloud-Based Control. 2019 IEEE Information Theory Workshop (ITW). :1–5.
We consider a cloud-based control architecture for a linear plant with Gaussian process noise, where the state of the plant contains a client's sensitive information. We assume that the cloud tries to estimate the state while executing a designated control algorithm. The mutual information between the client's actual state and the cloud's estimate is adopted as a measure of privacy loss. We discuss the necessity of uplink and downlink privacy masks. After observing that privacy is not necessarily a monotone function of the noise levels of privacy masks, we discuss the joint design procedure for uplink and downlink privacy masks. Finally, the trade-off between privacy and control performance is explored.
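As a back-of-the-envelope companion to the abstract, the following sketch evaluates privacy loss for a one-shot scalar Gaussian uplink mask, where the mutual information and the cloud's estimation error have closed forms. This static picture is an assumption-laden simplification: it cannot exhibit the closed-loop non-monotonicity the paper discusses, and all variances are invented.

```python
# Sketch: scalar Gaussian uplink mask. The cloud sees y = x + n_u and its
# MMSE estimate of x satisfies I(x; y) = 0.5 * log(1 + var_x / var_u).
import numpy as np

var_x = 1.0                            # variance of the sensitive state (assumed)
for var_u in (0.1, 0.5, 1.0, 5.0):     # uplink mask noise levels (assumed)
    mi = 0.5 * np.log(1.0 + var_x / var_u)    # privacy loss in nats
    mmse = var_x * var_u / (var_x + var_u)    # cloud's estimation error
    print(f"uplink mask var {var_u:4.1f}: privacy loss {mi:.3f} nats, "
          f"cloud MMSE {mmse:.3f}")
```

In this one-shot setting more mask noise always lowers the mutual information; the paper's point is that in the closed loop, with both uplink and downlink masks, this monotonicity need not hold, which motivates the joint design procedure.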