Biblio

Found 145 results

Filters: Keyword is Differential privacy
2023-01-20
Wu, Fazong, Wang, Xin, Yang, Ming, Zhang, Heng, Wu, Xiaoming, Yu, Jia.  2022.  Stealthy Attack Detection for Privacy-preserving Real-time Pricing in Smart Grids. 2022 13th Asian Control Conference (ASCC). :2012—2017.

Over the past decade, smart grids have been widely deployed. Real-time pricing can better address demand-side management in smart grids, but it requires managers to interact more with consumers at the data level, which raises many privacy threats. Thus, we introduce differential privacy into real-time pricing for privacy protection. However, differential privacy leaves more room for an adversary to compromise the robustness of the system, which has not been well addressed in the literature. In this paper, we propose a novel active attack detection scheme against stealthy attacks, and then prove the correctness and effectiveness of the proposed scheme. Further, we conduct extensive experiments with real datasets from CER to verify the detection performance of the proposed scheme.
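
The abstract does not spell out how differential privacy enters the pricing loop, so the following is only a minimal sketch of one common pattern: perturb each consumer's reported load with Laplace noise before the real-time price is computed from the aggregate. The sensitivity bound, price formula, and function names are assumptions for illustration, not the authors' scheme.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Perturb a single reading with Laplace noise calibrated to (sensitivity, epsilon)."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

# Hypothetical setting: each household's report is capped at 10 kWh,
# so one reading can change the value by at most 10 (the sensitivity).
epsilon = 0.5
true_loads = np.array([3.2, 7.8, 1.4, 9.9])                       # kWh per household
noisy_loads = np.array([laplace_mechanism(x, 10.0, epsilon) for x in true_loads])

# Toy real-time price derived only from the noisy aggregate demand.
price = 0.10 + 0.02 * noisy_loads.sum()
print(f"noisy aggregate: {noisy_loads.sum():.2f} kWh, price: {price:.3f} $/kWh")
```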

2023-01-06
Anastasakis, Zacharias, Psychogyios, Konstantinos, Velivassaki, Terpsi, Bourou, Stavroula, Voulkidis, Artemis, Skias, Dimitrios, Gonos, Antonis, Zahariadis, Theodore.  2022.  Enhancing Cyber Security in IoT Systems using FL-based IDS with Differential Privacy. 2022 Global Information Infrastructure and Networking Symposium (GIIS). :30—34.
Nowadays, IoT networks and devices are part of our everyday life, capturing and carrying unlimited data. However, the increasing penetration of connected systems and devices implies rising cybersecurity threats, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and Machine Learning can take advantage of huge volumes of IoT network logs to enhance cybersecurity in IoT. However, these data often need to remain private. Federated Learning (FL) provides a potential solution that enables collaborative training of an attack detection model among a set of federated nodes, while preserving privacy as data remain local and are never disclosed or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model which can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy as a privacy-preserving technique and various configurations of the participating nodes. We prove that the proposed model protects privacy without actually compromising performance. Our model realizes a limited performance impact of only ∼7% less testing accuracy compared to the baseline while simultaneously guaranteeing security and applicability.
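
The abstract does not specify where the differential-privacy noise is applied, so the sketch below shows one common arrangement in DP federated learning: each node clips its local model update and adds Gaussian noise before sending it, and the server simply averages the privatized updates. The clipping threshold and noise multiplier are assumed values.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1,
                     rng=np.random.default_rng()):
    """Client side: clip a local model update and add Gaussian noise before it leaves the node."""
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)

def federated_round(local_updates):
    """Server side: average the privatized client updates for one FL round."""
    return np.mean([privatize_update(u) for u in local_updates], axis=0)

# Toy round with three IoT nodes, each contributing a weight-delta vector.
updates = [np.random.randn(8) * 0.1 for _ in range(3)]
global_delta = federated_round(updates)
```
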
Yang, Xuefeng, Liu, Li, Zhang, Yinggang, Li, Yihao, Liu, Pan, Ai, Shili.  2022.  A Privacy-preserving Approach to Distributed Set-membership Estimation over Wireless Sensor Networks. 2022 9th International Conference on Dependable Systems and Their Applications (DSA). :974—979.
This paper focuses on systems over wireless sensor networks. The systems considered are linear with discrete, time-varying dynamics, i.e., discrete-time linear time-varying systems (DLTVS). DLTVS are vulnerable to network attacks when sensors in the network exchange information, which puts their security at risk. A privacy-preserving DLTVS is designed for this purpose. A set-membership estimator is designed by adding privacy noise obeying the Laplace distribution to the state at the initial moment. Simultaneously, the differential privacy of the system is analyzed. On this basis, the real state of the system and the form of the estimator for the desired distribution are analyzed. Finally, simulation examples are given, which show that the model with differential privacy can obtain accurate estimates and ensure the security of the system state.
Golatkar, Aditya, Achille, Alessandro, Wang, Yu-Xiang, Roth, Aaron, Kearns, Michael, Soatto, Stefano.  2022.  Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366—8376.
We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks. A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning to improve the trade-off. AdaMix reduces the error increase over the non-private upper bound from the baseline's 167–311%, on average across 6 datasets, to 68–92% depending on the privacy level selected by the user. AdaMix tackles the trade-off arising in visual classification, whereby the most privacy-sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
2022-09-20
Abuah, Chike, Silence, Alex, Darais, David, Near, Joseph P..  2021.  DDUO: General-Purpose Dynamic Analysis for Differential Privacy. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1—15.
Differential privacy enables general statistical analysis of data with formal guarantees of privacy protection at the individual level. Tools that assist data analysts with utilizing differential privacy have frequently taken the form of programming languages and libraries. However, many existing programming languages designed for compositional verification of differential privacy impose significant burden on the programmer (in the form of complex type annotations). Supplementary library support for privacy analysis built on top of existing general-purpose languages has been more usable, but incapable of pervasive end-to-end enforcement of sensitivity analysis and privacy composition. We introduce DDuo, a dynamic analysis for enforcing differential privacy. DDuo is usable by non-experts: its analysis is automatic and it requires no additional type annotations. DDuo can be implemented as a library for existing programming languages; we present a reference implementation in Python which features moderate runtime overheads on realistic workloads. We include support for several data types, distance metrics and operations which are commonly used in modern machine learning programs. We also provide initial support for tracking the sensitivity of data transformations in popular Python libraries for data analysis. We formalize the novel core of the DDuo system and prove it sound for sensitivity analysis via a logical relation for metric preservation. We also illustrate DDuo's usability and flexibility through various case studies which implement state-of-the-art machine learning algorithms.
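
DDuo's real API is not shown in the abstract, so the snippet below is only a hypothetical illustration of the core idea it describes: a dynamic analysis that tracks a sensitivity bound alongside each value at runtime and uses the tracked bound when noise is finally added. The class and function names are invented for this sketch and are not DDuo's interface.

```python
import numpy as np

class Sensitive:
    """Toy dynamic-analysis wrapper: a value plus an upper bound on its sensitivity.

    This mirrors the general idea of runtime sensitivity tracking; it is not
    DDuo's real API and only handles a couple of operations.
    """
    def __init__(self, value, sensitivity):
        self.value = value
        self.sensitivity = sensitivity

    def __add__(self, other):
        if isinstance(other, Sensitive):
            return Sensitive(self.value + other.value,
                             self.sensitivity + other.sensitivity)
        return Sensitive(self.value + other, self.sensitivity)   # constants add no sensitivity

    def __mul__(self, scalar):
        return Sensitive(self.value * scalar, abs(scalar) * self.sensitivity)

def laplace_release(x, epsilon, rng=np.random.default_rng()):
    """Release a tracked value with Laplace noise scaled to its recorded sensitivity."""
    return x.value + rng.laplace(0.0, x.sensitivity / epsilon)

count = Sensitive(1042, sensitivity=1)   # e.g., a counting query where one user changes it by 1
doubled = count * 2                      # the tracked sensitivity doubles automatically
print(laplace_release(doubled, epsilon=1.0))
```
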
2022-08-26
Chen, Bo, Hawkins, Calvin, Yazdani, Kasra, Hale, Matthew.  2021.  Edge Differential Privacy for Algebraic Connectivity of Graphs. 2021 60th IEEE Conference on Decision and Control (CDC). :2764—2769.
Graphs are the dominant formalism for modeling multi-agent systems. The algebraic connectivity of a graph is particularly important because it provides the convergence rates of consensus algorithms that underlie many multi-agent control and optimization techniques. However, sharing the value of algebraic connectivity can inadvertently reveal sensitive information about the topology of a graph, such as connections in social networks. Therefore, in this work we present a method to release a graph’s algebraic connectivity under a graph-theoretic form of differential privacy, called edge differential privacy. Edge differential privacy obfuscates differences among graphs’ edge sets and thus conceals the absence or presence of sensitive connections therein. We provide privacy with bounded Laplace noise, which improves accuracy relative to conventional unbounded noise. The private algebraic connectivity values are analytically shown to provide accurate estimates of consensus convergence rates, as well as accurate bounds on the diameter of a graph and the mean distance between its nodes. Simulation results confirm the utility of private algebraic connectivity in these contexts.
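
As a rough sketch of the pipeline (not the authors' bounded Laplace mechanism), the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian L = D - A, and a bounded sample can be drawn naively by rejection; the clipping interval [0, n] below uses the fact that algebraic connectivity never exceeds the number of nodes.

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    return np.sort(np.linalg.eigvalsh(laplacian))[1]

def bounded_laplace(value, scale, lower, upper, rng=np.random.default_rng()):
    """Naive bounded Laplace via rejection sampling (illustrative, not the paper's sampler)."""
    while True:
        candidate = value + rng.laplace(0.0, scale)
        if lower <= candidate <= upper:
            return candidate

# 4-node cycle graph as a toy example.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
lam2 = algebraic_connectivity(A)
private_lam2 = bounded_laplace(lam2, scale=0.5, lower=0.0, upper=A.shape[0])
print(lam2, private_lam2)
```
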
Liu, Tianyu, Di, Boya, Wang, Shupeng, Song, Lingyang.  2021.  A Privacy-Preserving Incentive Mechanism for Federated Cloud-Edge Learning. 2021 IEEE Global Communications Conference (GLOBECOM). :1—6.
The federated learning scheme enhances the privacy preservation through avoiding the private data uploading in cloud-edge computing. However, the attacks against the uploaded model updates still cause private data leakage which demotivates the privacy-sensitive participating edge devices. Facing this issue, we aim to design a privacy-preserving incentive mechanism for the federated cloud-edge learning (PFCEL) system such that 1) the edge devices are motivated to actively contribute to the updated model uploading, 2) a trade-off between the private data leakage and the model accuracy is achieved. We formulate the incentive design problem as a three-layer Stackelberg game, where the server-device interaction is further formulated as a contract design problem. Extensive numerical evaluations demonstrate the effectiveness of our designed mechanism in terms of privacy preservation and system utility.
Zuo, Zhiqiang, Tian, Ran, Wang, Yijing.  2021.  Bipartite Consensus for Multi-Agent Systems with Differential Privacy Constraint. 2021 40th Chinese Control Conference (CCC). :5062—5067.
This paper studies the differential privacy-preserving problem of discrete-time multi-agent systems (MASs) with antagonistic information, where the connected signed graph is structurally balanced. First, we introduce the bipartite consensus definitions in the sense of mean square and almost sure convergence, respectively. Second, some criteria for mean square and almost sure bipartite consensus are derived, where the eventual consensus value is related to the gauge matrix and the agents' initial states. Third, we design the ε-differential privacy algorithm and characterize the trade-off between differential privacy and system performance. Finally, simulations validate the effectiveness of the proposed algorithm.
Chowdhury, Sayak Ray, Zhou, Xingyu, Shroff, Ness.  2021.  Adaptive Control of Differentially Private Linear Quadratic Systems. 2021 IEEE International Symposium on Information Theory (ISIT). :485—490.
In this paper we study the problem of regret minimization in reinforcement learning (RL) under differential privacy constraints. This work is motivated by the wide range of RL applications for providing personalized service, where privacy concerns are becoming paramount. In contrast to previous works, we take the first step towards non-tabular RL settings, while providing a rigorous privacy guarantee. In particular, we consider the adaptive control of differentially private linear quadratic (LQ) systems. We develop the first private RL algorithm, Private-OFU-RL, which is able to attain a sub-linear regret while guaranteeing privacy protection. More importantly, the additional cost due to privacy is only on the order of ln(1/δ)^{1/4}/ε^{1/2} given privacy parameters ε, δ > 0. Through this process, we also provide a general procedure for adaptive control of LQ systems under changing regularizers, which not only generalizes previous non-private controls, but also serves as the basis for general private controls.
2022-07-29
Tao, Qian, Tong, Yongxin, Li, Shuyuan, Zeng, Yuxiang, Zhou, Zimu, Xu, Ke.  2021.  A Differentially Private Task Planning Framework for Spatial Crowdsourcing. 2021 22nd IEEE International Conference on Mobile Data Management (MDM). :9—18.
Spatial crowdsourcing has stimulated various new applications such as taxi calling and food delivery. A key enabler for these spatial crowdsourcing based applications is to plan routes for crowd workers to execute tasks given diverse requirements of workers and the spatial crowdsourcing platform. Despite extensive studies on task planning in spatial crowdsourcing, few have accounted for the location privacy of tasks, which may be misused by an untrustworthy platform. In this paper, we explore efficient task planning for workers while protecting the locations of tasks. Specifically, we define the Privacy-Preserving Task Planning (PPTP) problem, which aims at both total revenue maximization of the platform and differential privacy of task locations. We first apply the Laplacian mechanism to protect location privacy, and analyze its impact on the total revenue. Then we propose an effective and efficient task planning algorithm for the PPTP problem. Extensive experiments on both synthetic and real datasets validate the advantages of our algorithm in terms of total revenue and time cost.
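
The abstract states that task locations are protected with a Laplacian mechanism before planning; below is a minimal per-coordinate version of that idea. The sensitivity value and the independent treatment of the two coordinates are assumptions for illustration (location privacy work often uses a planar Laplace / geo-indistinguishability mechanism instead).

```python
import numpy as np

def perturb_location(lat, lon, epsilon, sensitivity=0.01, rng=np.random.default_rng()):
    """Add Laplace noise to each coordinate of a task location.

    `sensitivity` is a hypothetical bound (in degrees) on how much one task may
    shift a coordinate; a real system would derive it from the protected region.
    """
    scale = sensitivity / epsilon
    return lat + rng.laplace(0.0, scale), lon + rng.laplace(0.0, scale)

tasks = [(39.9042, 116.4074), (39.9100, 116.4200)]   # toy task locations
noisy_tasks = [perturb_location(lat, lon, epsilon=1.0) for lat, lon in tasks]
# The planner would then assign workers to routes using `noisy_tasks` only.
```
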
Li, Xianxian, Fu, Xuemei, Yu, Feng, Shi, Zhenkui, Li, Jie, Yang, Junhao.  2021.  A Private Statistic Query Scheme for Encrypted Electronic Medical Record System. 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD). :1033—1039.
In this paper, we propose a scheme that supports statistical queries and authorized access control on an encrypted electronic medical record database (EMDB). Different from other schemes, it is based on Differential Privacy (DP), which can protect the privacy of patients. By deploying an improved Multi-Authority Attribute-Based Encryption (MA-ABE) scheme, all authorities can distribute their search capability to clients under different authorities without additional negotiations. To the best of our knowledge, there are few studies on statistical queries over encrypted data. In this work, we consider supporting differentially private statistical queries. To improve search efficiency, we leverage a Bloom Filter (BF) to judge whether the keywords queried by users exist. Finally, we use experiments to verify and evaluate the feasibility of our proposed scheme.
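
As a hedged illustration of two ingredients named in the abstract, the sketch below builds a tiny Bloom filter with hashlib to test keyword existence and then releases a keyword count with Laplace noise; the parameters and the way the two pieces are combined are assumptions, not the paper's construction (which additionally relies on MA-ABE for access control).

```python
import hashlib
import numpy as np

class BloomFilter:
    def __init__(self, size=1024, n_hashes=3):
        self.size, self.n_hashes = size, n_hashes
        self.bits = [False] * size

    def _positions(self, item):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for kw in ["diabetes", "hypertension"]:
    bf.add(kw)

def private_count(true_count, epsilon, rng=np.random.default_rng()):
    """Counting queries have sensitivity 1, so Laplace(1/epsilon) noise suffices."""
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

if bf.might_contain("diabetes"):                     # skip queries for absent keywords
    print(private_count(true_count=128, epsilon=0.5))
```
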
2022-07-15
Zhang, Dayin, Chen, Xiaojun, Shi, Jinqiao, Wang, Dakui, Zeng, Shuai.  2021.  A Differential Privacy Collaborative Deep Learning Algorithm in Pervasive Edge Computing Environment. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :347—354.

With the development of 5G technology and intelligent terminals, the future direction of Industrial Internet of Things (IIoT) evolution is Pervasive Edge Computing (PEC). In the pervasive edge computing environment, intelligent terminals can perform calculations and data processing. By migrating part of the original cloud computing model's calculations to intelligent terminals, an intelligent terminal can complete model training without uploading local data to a remote server. Pervasive edge computing solves the problem of data islands and has also been successfully applied in scenarios such as vehicle interconnection and video surveillance. However, pervasive edge computing faces serious security problems. Suppose the remote server is honest but curious. In that case, it can still design algorithms for the intelligent terminal to execute and infer sensitive content, such as identity data and private pictures, from the information returned by the intelligent terminal. In this paper, we study the problem of honest-but-curious remote servers infringing on intelligent terminals' privacy and propose a differential privacy collaborative deep learning algorithm for the pervasive edge computing environment. We use a Gaussian mechanism that meets the differential privacy guarantee to add noise on the first layer of the neural network to protect the data of the intelligent terminal, and use analytical moments accountant technology to track the cumulative privacy loss. Experiments show that, with the Gaussian mechanism, the training data of intelligent terminals can be protected with only a limited reduction in accuracy.
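The abstract describes adding Gaussian noise on the first layer of the network and tracking the cumulative loss with a moments accountant. The snippet below sketches only the noise-injection step on a first-layer gradient; the clipping threshold and noise multiplier are assumed values, and the accountant is omitted.

```python
import numpy as np

def noisy_first_layer_grad(grad, clip_norm=1.0, noise_multiplier=1.0,
                           rng=np.random.default_rng()):
    """Clip the first-layer gradient and add Gaussian noise (a DP-SGD-style step)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)

# Toy first-layer weight gradient for a 784-input, 128-unit layer.
grad_w1 = np.random.randn(128, 784) * 0.01
private_grad_w1 = noisy_first_layer_grad(grad_w1)
# The terminal applies `private_grad_w1` locally and reports only noised quantities.
```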

Yuan, Rui, Wang, Xinna, Xu, Jiangmin, Meng, Shunmei.  2021.  A Differential-Privacy-based hybrid collaborative recommendation method with factorization and regression. 2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :389—396.
Recommender systems have been proven to be effective techniques for providing users with better experiences. However, when a recommender knows a user's preference characteristics or obtains their sensitive information, a series of privacy concerns is raised. A number of solutions have been proposed in the literature to enhance the degree of privacy protection in recommender systems. Although the existing solutions enhance protection, they simultaneously lead to a decrease in recommendation accuracy. In this paper, we propose a security-aware hybrid recommendation method by combining factorization and regression techniques. Specifically, the differential privacy mechanism is integrated into data pre-processing for data encryption. First, data are perturbed to satisfy differential privacy and transported to the recommender. Then the recommender calculates the aggregated data. However, applying differential privacy raises the utility issue of low recommendation accuracy, and the use of a single model may cause overfitting. In order to tackle this challenge, we adopt a fusion prediction model combining linear regression (LR) and matrix factorization (MF) for collaborative recommendation. With the MovieLens dataset, we evaluate the recommendation accuracy and regression performance of our recommender system and demonstrate that it performs better than existing recommender systems under privacy requirements.
2022-06-08
Giehl, Alexander, Heinl, Michael P., Busch, Maximilian.  2021.  Leveraging Edge Computing and Differential Privacy to Securely Enable Industrial Cloud Collaboration Along the Value Chain. 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE). :2023–2028.
Big data continues to grow in the manufacturing domain due to increasing interconnectivity on the shop floor in the course of the fourth industrial revolution. The optimization of machines based on either real-time or historical machine data provides benefits to both machine producers and operators. In order to be able to make use of these opportunities, it is necessary to access the machine data, which can include sensitive information such as intellectual property. Employing the use case of machine tools, this paper presents a solution enabling industrial data sharing and cloud collaboration while protecting sensitive information. It employs the edge computing paradigm to apply differential privacy to machine data in order to protect sensitive information and simultaneously allow machine producers to perform the necessary calculations and analyses using this data.
2022-05-09
Mittal, Sonam, Jindal, Priya, Ramkumar, K. R..  2021.  Data Privacy and System Security for Banking on Clouds using Homomorphic Encryption. 2021 2nd International Conference for Emerging Technology (INCET). :1–6.
In recent times, the use of cloud computing has gained popularity all over the world in the context of performing smart computations on big data. The privacy of the client's sensitive data is an issue of utmost importance. Data leakage, or hijackers stealing significant information about the client, may ultimately affect the reputation and prestige of the data's owner (the bank) and its clients (the customers). In general, to preserve the privacy of banking data it is preferred to store, process, and transmit the data in encrypted form. The main concern then becomes secure computation over the encrypted text, as other ways of performing computation in the cloud leave data more vulnerable to hacking and attacks. Existing classical encryption techniques such as RSA, AES, and others provide secure transaction procedures for data in the cloud, but they are not fit for secure computation over that data. In 2009, Gentry presented a solution to such issues in the form of homomorphic encryption (HE), which can perform computation over encrypted text without decrypting the data itself. Nowadays, privacy-enhancing techniques (PETs) can be explored for further potential benefits on security issues and are useful in light of historical cases of privacy failure. Differential privacy, federated analysis, homomorphic encryption, zero-knowledge proofs, and secure multiparty computation are privacy-enhancing techniques that may be useful in financial services, as they provide fully fledged mechanisms for financial institutions. With industry collaboration, these techniques may enable new data-sharing agreements for more secure solutions over data. In this paper, the primary concern is to investigate the different standards and properties of homomorphic encryption in digital banking and financial institutions.
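
For a concrete taste of computing on encrypted values, the snippet below uses python-paillier (the `phe` package), an open-source library that is not referenced by the paper. Paillier is additively (not fully) homomorphic, so this only illustrates the basic idea: a server can sum encrypted amounts without ever seeing them.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A bank's server receives encrypted account movements and sums them
# without ever seeing the plaintext amounts.
deposits = [120.50, 75.25, -40.00]
encrypted = [public_key.encrypt(x) for x in deposits]
encrypted_total = sum(encrypted[1:], encrypted[0])   # homomorphic addition

print(private_key.decrypt(encrypted_total))          # 155.75, decrypted only by the key holder
```
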
2022-04-26
Yang, Ge, Wang, Shaowei, Wang, Haijie.  2021.  Federated Learning with Personalized Local Differential Privacy. 2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS). :484–489.

Recently, federated learning (FL), as an advanced and practical solution, has been applied to deal with privacy-preserving issues in distributed multi-party federated modeling. However, most existing FL methods assume the same privacy budget for all participants while ignoring their varying privacy requirements. In this paper, we propose, for the first time, an algorithm (PLU-FedOA) to optimize the deep neural networks of horizontal FL with personalized local differential privacy. For such considerations, we design two approaches: PLU, which allows clients to upload local updates under differential privacy at a personally selected privacy level, and FedOA, which helps the server aggregate local parameters with optimized weights in mixed privacy-preserving scenarios. Moreover, we theoretically analyze the privacy and optimization effects of our approaches. Finally, we verify PLU-FedOA on real-world datasets.
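
The exact PLU noise mechanism and the FedOA weighting rule are not given in the abstract, so the sketch below only illustrates the general pattern of personalized local DP in FL: each client clips its update and adds Laplace noise scaled to its own chosen ε, and the server weights clients inversely to their noise variance. The inverse-variance weighting is an illustrative assumption, not the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def plu_style_update(update, epsilon, clip_norm=1.0):
    """Client side: clip, then add Laplace noise whose scale reflects the client's own epsilon."""
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    scale = clip_norm / epsilon
    return clipped + rng.laplace(0.0, scale, size=update.shape), scale

def weighted_aggregate(noisy_updates, scales):
    """Server side: weight clients inversely to their noise variance (2 * scale^2 for Laplace)."""
    weights = np.array([1.0 / (2.0 * s ** 2) for s in scales])
    weights /= weights.sum()
    return np.average(noisy_updates, axis=0, weights=weights)

# Three clients with personally chosen privacy levels.
epsilons = [0.5, 1.0, 4.0]
raw_updates = [rng.normal(size=16) for _ in epsilons]
noisy, scales = zip(*[plu_style_update(u, e) for u, e in zip(raw_updates, epsilons)])
global_update = weighted_aggregate(np.stack(noisy), scales)
```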

Loya, Jatan, Bana, Tejas.  2021.  Privacy-Preserving Keystroke Analysis using Fully Homomorphic Encryption & Differential Privacy. 2021 International Conference on Cyberworlds (CW). :291–294.

Keystroke dynamics is a behavioural biometric form of authentication based on the inherent typing behaviour of an individual. While this technique is gaining traction, protecting the privacy of the users is of utmost importance. Fully Homomorphic Encryption is a technique that allows performing computation on encrypted data, which enables processing of sensitive data in an untrusted environment. FHE is also known to be “future-proof” since it is a lattice-based cryptosystem that is regarded as quantum-safe. It has seen significant performance improvements over the years, along with substantially more developer-friendly tooling. We propose a neural network for keystroke analysis that is trained using differential privacy, to speed up training while preserving privacy, and that predicts on encrypted data using FHE, to keep the users' privacy intact while offering sufficient usability.

Kim, Muah, Günlü, Onur, Schaefer, Rafael F..  2021.  Federated Learning with Local Differential Privacy: Trade-Offs Between Privacy, Utility, and Communication. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2650–2654.

Federated learning (FL) allows training on a massive amount of data privately, due to its decentralized structure. Stochastic gradient descent (SGD) is commonly used for FL due to its good empirical performance, but sensitive user information can still be inferred from the weight updates shared during FL iterations. We consider Gaussian mechanisms to preserve local differential privacy (LDP) of user data in the FL model with SGD. The trade-offs between user privacy, global utility, and transmission rate are proved by defining appropriate metrics for FL with LDP. Compared to existing results, the query sensitivity used in LDP is defined as a variable, and a tighter privacy accounting method is applied. The proposed utility bound allows heterogeneous parameters over all users. Our bounds characterize how much utility decreases and transmission rate increases if a stronger privacy regime is targeted. Furthermore, given a target privacy level, our results guarantee a significantly larger utility and a smaller transmission rate as compared to existing privacy accounting methods.
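
For reference, the classical (and loose) way to calibrate a Gaussian mechanism to an (ε, δ) target is σ = Δ·sqrt(2·ln(1.25/δ))/ε, valid for 0 < ε < 1; the paper applies a tighter accounting, so the sketch below is only the textbook baseline.

```python
import math

def gaussian_sigma(sensitivity, epsilon, delta):
    """Classical (epsilon, delta)-DP calibration for the Gaussian mechanism.

    Valid for 0 < epsilon < 1; tighter accountants, as used in the paper,
    give smaller noise for the same privacy target.
    """
    if not 0 < epsilon < 1:
        raise ValueError("this closed form assumes 0 < epsilon < 1")
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

# Example: query sensitivity 1.0, target (0.5, 1e-5)-LDP per report.
print(gaussian_sigma(sensitivity=1.0, epsilon=0.5, delta=1e-5))
```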

Shi, Jibo, Lin, Yun, Zhang, Zherui, Yu, Shui.  2021.  A Hybrid Intrusion Detection System Based on Machine Learning under Differential Privacy Protection. 2021 IEEE 94th Vehicular Technology Conference (VTC2021-Fall). :1–6.

With the development of networks, network security has become a topic of increasing concern. In recent years, machine learning technology has become an effective means of network intrusion detection. However, machine learning technology requires a large amount of data for training, and training data often contain private information, which brings a great risk of privacy leakage. At present, there is little research on data privacy protection in the field of intrusion detection. Regarding the issue of privacy and security, we combine differential privacy and machine learning algorithms, including the One-Class Support Vector Machine (OCSVM) and Local Outlier Factor (LOF), to propose a hybrid intrusion detection system (IDS) with privacy protection. We add Laplacian noise to the original network intrusion detection dataset to obtain differentially private datasets with different privacy budgets, and propose a hybrid IDS model based on machine learning to verify their utility. Experiments show that while protecting data privacy, the hybrid IDS can achieve detection accuracy comparable to traditional machine learning algorithms.
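
The dataset, noise scale, and the rule for combining the two detectors are not specified in the abstract, so the sketch below simply trains scikit-learn's OneClassSVM and LocalOutlierFactor on Laplace-perturbed synthetic traffic features and flags a flow if either model does; all parameters are assumptions.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Toy "network traffic" features: benign training data plus a few anomalous test flows.
X_train = rng.normal(0.0, 1.0, size=(500, 8))
X_test = np.vstack([rng.normal(0.0, 1.0, size=(20, 8)),
                    rng.normal(5.0, 1.0, size=(5, 8))])

# Laplace perturbation of the training set for a chosen privacy budget
# (a per-feature sensitivity of 1.0 is an assumption for this sketch).
epsilon = 1.0
X_train_dp = X_train + rng.laplace(0.0, 1.0 / epsilon, size=X_train.shape)

ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(X_train_dp)
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_train_dp)

# +1 = normal, -1 = anomaly; a simple OR-combination flags a flow if either model does.
flags = (ocsvm.predict(X_test) == -1) | (lof.predict(X_test) == -1)
print(f"{flags.sum()} of {len(X_test)} test flows flagged as intrusions")
```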

Feng, Tianyi, Zhang, Zhixiang, Wong, Wai-Choong, Sun, Sumei, Sikdar, Biplab.  2021.  A Privacy-Preserving Pedestrian Dead Reckoning Framework Based on Differential Privacy. 2021 IEEE 32nd Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). :1487–1492.

Pedestrian dead reckoning (PDR) is a widely used approach to estimate locations and trajectories. Accessing location-based services with trajectory data can bring convenience to people, but may also raise privacy concerns that need to be addressed. In this paper, a privacy-preserving pedestrian dead reckoning framework is proposed to protect a user’s trajectory privacy based on differential privacy. We introduce two metrics to quantify trajectory privacy and data utility. Our proposed privacy-preserving trajectory extraction algorithm consists of three mechanisms for the initial locations, stride lengths and directions. In addition, we design an adversary model based on particle filtering to evaluate the performance and demonstrate the effectiveness of our proposed framework with our collected sensor reading dataset.
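
The paper defines separate mechanisms for the initial location, stride lengths, and directions; the sketch below just adds Laplace noise to each of those quantities (with assumed sensitivities) and replays the standard dead-reckoning update x ← x + l·cos(θ), y ← y + l·sin(θ), which is enough to see how the noise propagates along a trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy(value, sensitivity, epsilon):
    """Laplace-perturb a single PDR quantity (the sensitivities here are assumptions)."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

def private_trajectory(start, strides, headings, eps_loc=1.0, eps_stride=2.0, eps_dir=2.0):
    """Rebuild a dead-reckoned trajectory from a privatized initial location, strides, and headings."""
    x = noisy(start[0], 5.0, eps_loc)          # ~5 m initial-location sensitivity (assumed)
    y = noisy(start[1], 5.0, eps_loc)
    path = [(x, y)]
    for stride, heading in zip(strides, headings):
        l = noisy(stride, 0.2, eps_stride)      # ~0.2 m stride sensitivity (assumed)
        theta = noisy(heading, np.pi / 8, eps_dir)
        x, y = x + l * np.cos(theta), y + l * np.sin(theta)
        path.append((x, y))
    return path

steps = 10
trajectory = private_trajectory((0.0, 0.0),
                                strides=[0.7] * steps,
                                headings=np.linspace(0.0, np.pi / 2, steps))
```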

Mehner, Luise, Voigt, Saskia Nuñez von, Tschorsch, Florian.  2021.  Towards Explaining Epsilon: A Worst-Case Study of Differential Privacy Risks. 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :328–331.

Differential privacy is a concept to quantify the disclosure of private information, controlled by the privacy parameter ε. However, an intuitive interpretation of ε is needed to explain the privacy loss to data engineers and data subjects. In this paper, we conduct a worst-case study of differential privacy risks. We generalize an existing model and reduce its complexity to provide more understandable statements on the privacy loss. To this end, we analyze the impact of the parameters and introduce the notions of a global privacy risk and a global privacy leak.
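
One textbook way to make ε tangible (not necessarily the exact risk notion introduced in this paper) is the worst-case Bayesian reading: after observing one ε-DP release, an adversary's posterior odds about an individual's record can grow by at most a factor of e^ε, so starting from a 50/50 prior the posterior is bounded by e^ε / (1 + e^ε). A small calculation of that bound:

```python
import math

def worst_case_posterior(epsilon, prior=0.5):
    """Upper bound on an adversary's posterior belief that a target record is present,
    after one epsilon-DP release, starting from the given prior (Bayes rule plus the DP ratio bound)."""
    odds = prior / (1.0 - prior)
    worst_odds = math.exp(epsilon) * odds
    return worst_odds / (1.0 + worst_odds)

for eps in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"epsilon = {eps:>4}: posterior <= {worst_case_posterior(eps):.3f}")
```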

Kühtreiber, Patrick, Reinhardt, Delphine.  2021.  Usable Differential Privacy for the Internet-of-Things. 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). :426–427.

Current implementations of Differential Privacy (DP) focus primarily on the privacy of the data release. The planned thesis will investigate steps towards a user-centric approach of DP in the scope of the Internet-of-Things (IoT) which focuses on data subjects, IoT developers, and data analysts. We will conduct user studies to find out more about the often conflicting interests of the involved parties and the encountered challenges. Furthermore, a technical solution will be developed to assist data subjects and analysts in making better informed decisions. As a result, we expect our contributions to be a step towards the development of usable DP for IoT sensor data.

Qin, Desong, Zhang, Zhenjiang.  2021.  A Frequency Estimation Algorithm under Local Differential Privacy. 2021 15th International Conference on Ubiquitous Information Management and Communication (IMCOM). :1–5.

With the rapid development of 5G, the Internet of Things (IoT) and edge computing technologies dramatically improve the efficiency of smart industries such as healthcare, smart agriculture, and smart cities. IoT is a data-driven system in which many smart devices generate and collect a massive amount of private user data, which may be used to improve users' efficiency. However, these data tend to leak personal privacy when sent to the Internet. Differential privacy (DP) provides a method for measuring privacy protection and a more flexible privacy protection algorithm. In this paper, we study an estimation problem and propose a new frequency estimation algorithm named MFEA that redesigns the publishing process. The algorithm maps a finite data set to an integer range through a hash function, then initializes the data vector according to the mapped value and adds noise through randomized response. The frequency of all perturbed data is estimated with maximum likelihood. Compared with traditional frequency estimation, our approach achieves better algorithm complexity and error control while satisfying local differential privacy (LDP).
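
MFEA's hash-based encoding and maximum-likelihood estimator are not detailed in the abstract, so the sketch below shows the classic k-ary randomized response baseline this line of work builds on: each report keeps its true value with probability p = e^ε/(e^ε + k - 1), otherwise reports another value uniformly, and the collector inverts those probabilities to get unbiased frequency estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def krr_perturb(values, k, epsilon):
    """k-ary randomized response: keep the true value w.p. p, else pick another value uniformly."""
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    keep = rng.random(len(values)) < p
    random_other = (values + rng.integers(1, k, size=len(values))) % k
    return np.where(keep, values, random_other)

def krr_estimate(reports, k, epsilon):
    """Unbiased frequency estimate that inverts the perturbation probabilities."""
    n = len(reports)
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    q = 1.0 / (np.exp(epsilon) + k - 1)
    observed = np.bincount(reports, minlength=k) / n
    return (observed - q) / (p - q)

k, epsilon = 4, 1.0
true_values = rng.integers(0, k, size=10_000)            # e.g., hashed sensor readings
estimates = krr_estimate(krr_perturb(true_values, k, epsilon), k, epsilon)
print(estimates)                                          # close to the true frequencies (~0.25 each)
```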

Wang, Haoxiang, Zhang, Jiasheng, Lu, Chenbei, Wu, Chenye.  2021.  Privacy Preserving in Non-Intrusive Load Monitoring: A Differential Privacy Perspective. 2021 IEEE Power Energy Society General Meeting (PESGM). :01–01.

Smart meter devices enable a better understanding of demand at the potential risk of private information leakage. One promising solution to mitigating such risk is to inject noise into the meter data to achieve a certain level of differential privacy. In this paper, we cast one-shot non-intrusive load monitoring (NILM) in the compressive sensing framework, and bridge the gap between the theoretical accuracy of NILM inference and differential privacy's parameters. We then derive valid theoretical bounds to offer insights on how the differential privacy parameters affect NILM performance. Moreover, we generalize our conclusions by proposing a hierarchical framework to solve the multi-shot NILM problem. Numerical experiments verify our analytical results and offer better physical insight into differential privacy in various practical scenarios. This also demonstrates the significance of our work for general privacy-preserving mechanism design.

Gadepally, Krishna Chaitanya, Mangalampalli, Sameer.  2021.  Effects of Noise on Machine Learning Algorithms Using Local Differential Privacy Techniques. 2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). :1–4.

Noise has been used as a way of protecting the privacy of users in public datasets for many decades now. Differential privacy is a newer standard for adding noise so that user privacy is protected. When this technique is applied to a single end user's data, it is called local differential privacy. In this study, we evaluate the effects of adding noise to generate randomized responses on machine learning models. We generate randomized responses using Gaussian and Laplacian noise on individual end-user data as well as correlated end-user data. Finally, we provide results that we have observed on a few datasets for various machine learning use cases.
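
The datasets and models used in the study are not listed in the abstract, so the sketch below uses a synthetic scikit-learn dataset to show the kind of experiment described: perturb each record's features with Laplace or Gaussian noise (a local-DP-style randomization with an assumed scale) and compare a classifier's test accuracy with and without the noise.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_noise(noise=None, scale=1.0):
    """Train on (optionally) locally perturbed features and report test accuracy."""
    X_noisy = X_train.copy()
    if noise == "laplace":
        X_noisy = X_noisy + rng.laplace(0.0, scale, size=X_noisy.shape)
    elif noise == "gaussian":
        X_noisy = X_noisy + rng.normal(0.0, scale, size=X_noisy.shape)
    model = LogisticRegression(max_iter=1000).fit(X_noisy, y_train)
    return model.score(X_test, y_test)

for kind in [None, "laplace", "gaussian"]:
    print(kind, round(accuracy_with_noise(kind, scale=2.0), 3))
```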