Biblio

Filters: Keyword is perturbation
2021-07-08
Chaturvedi, Amit Kumar, Chahar, Meetendra Singh, Sharma, Kalpana.  2020.  Proposing Innovative Perturbation Algorithm for Securing Portable Data on Cloud Servers. 2020 9th International Conference System Modeling and Advancement in Research Trends (SMART). :360–364.
Cloud computing provides an open architecture and a resource-sharing computing platform with a pay-per-use model. It is now a popular computing platform, and most new internet-based computing services are built on this innovation-friendly environment. We consider it innovation-friendly because developers can focus on service design rather than on arranging infrastructure, networking, resource management, and so on; all of these are available in cloud computing on a hired basis. A big question that arises here is the security and privacy of data, because the service provider itself obtains infrastructure, network, storage, processors, and other resources from third parties. The security and privacy of the portable user's data is thus the main motivation for this research paper. In this paper, we propose an innovative perturbation algorithm, MAP(), to secure the portable user's data on cloud servers.
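The abstract does not describe MAP()'s internal construction, so the following is a purely illustrative Python sketch of data perturbation in general, not the authors' algorithm: numeric fields of a record are masked with zero-mean, value-proportional noise before upload.

    # Illustrative sketch only: generic additive-noise perturbation of a
    # record's numeric fields, NOT the MAP() algorithm from the paper.
    import random

    def perturb_record(record, scale=0.1):
        """Add zero-mean uniform noise, proportional to each value's
        magnitude, to every numeric field; leave other fields intact."""
        return [v + random.uniform(-scale, scale) * abs(v)
                if isinstance(v, (int, float)) else v
                for v in record]

    print(perturb_record([42.0, 1735.5, "user-id"]))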
2021-02-22
Fang, S., Kennedy, S., Wang, C., Wang, B., Pei, Q., Liu, X..  2020.  Sparser: Secure Nearest Neighbor Search with Space-filling Curves. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :370–375.
Nearest neighbor search, a classic way of identifying similar data, can be applied to various areas, including databases, machine learning, natural language processing, and software engineering. Secure nearest neighbor search aims to find nearest neighbors to a given query point over encrypted data without accessing data in plaintext. It provides privacy protection to datasets when nearest neighbor queries need to be operated by an untrusted party (e.g., a public server). While different solutions have been proposed to support nearest neighbor queries on encrypted data, these existing solutions still encounter critical drawbacks in either efficiency or privacy. In light of the limitations in the current literature, we propose a novel approximate nearest neighbor search solution, referred to as Sparser, by leveraging a combination of space-filling curves, perturbation, and Order-Preserving Encryption. The advantages of Sparser are twofold: stronger privacy and improved efficiency. Specifically, Sparser pre-processes plaintext data with space-filling curves and perturbation so that the data become sparse, which mitigates leakage-abuse attacks and renders stronger privacy. In addition to privacy enhancement, Sparser can efficiently find approximate nearest neighbors over encrypted data in logarithmic time. Through extensive experiments over real-world datasets, we demonstrate that Sparser achieves strong privacy protection under leakage-abuse attacks and minimizes search time.
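The abstract does not specify which space-filling curve Sparser uses or how its perturbation and Order-Preserving Encryption are parameterized; as a hedged illustration of the curve step alone, a Z-order (Morton) encoding maps 2-D points to 1-D keys while roughly preserving locality, so range queries over encrypted keys can approximate nearest neighbor search.

    # Illustrative sketch of one common space-filling curve (Z-order /
    # Morton code); Sparser's actual curve and encryption are not given
    # in the abstract.
    def interleave_bits(x: int, y: int, bits: int = 16) -> int:
        """Interleave the bits of x and y into a single Z-order key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (2 * i)
            key |= ((y >> i) & 1) << (2 * i + 1)
        return key

    # Nearby points receive nearby keys:
    print(interleave_bits(3, 5))  # 39
    print(interleave_bits(3, 6))  # 45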
2020-04-20
Wang, Chong Xiao, Song, Yang, Tay, Wee Peng.  2018.  Preserving Parameter Privacy in Sensor Networks. 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). :1316–1320.
We consider the problem of preserving the privacy of a set of private parameters while allowing inference of a set of public parameters based on observations from sensors in a network. We assume that the public and private parameters are correlated with the sensor observations via a linear model. We define the utility loss and privacy gain functions based on the Cramér-Rao lower bounds for estimating the public and private parameters, respectively. Our goal is to minimize the utility loss while ensuring that the privacy gain is no less than a predefined privacy gain threshold, by allowing each sensor to perturb its own observation before sending it to the fusion center. We propose methods to determine the amount of noise each sensor needs to add to its observation under the cases where prior information is available or unavailable.
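For a linear Gaussian model y = Hθ + n with noise covariance C, the Cramér-Rao lower bound on estimating θ is the diagonal of (HᵀC⁻¹H)⁻¹. The sketch below illustrates only this standard bound, not the paper's optimization: adding perturbation noise at a sensor inflates C, which raises the bound on all parameters (privacy gain on the private ones, utility loss on the public ones).

    # Illustrative CRLB computation for a linear Gaussian sensor model;
    # the H, C, and noise values are made-up examples, not from the paper.
    import numpy as np

    H = np.array([[1.0, 0.5],
                  [0.3, 1.0],
                  [0.8, 0.2]])   # 3 sensors, 2 parameters (public, private)
    C = 0.1 * np.eye(3)          # baseline observation noise covariance

    def crlb(H, C):
        """Diagonal of the inverse Fisher information H^T C^{-1} H."""
        return np.diag(np.linalg.inv(H.T @ np.linalg.solve(C, H)))

    print("CRLB before perturbation:", crlb(H, C))
    # Sensor 2 adds its own perturbation noise with variance 0.5:
    C_pert = C + np.diag([0.0, 0.5, 0.0])
    print("CRLB after perturbation: ", crlb(H, C_pert))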
2019-01-31
Nakamura, T., Nishi, H..  2018.  TMk-Anonymity: Perturbation-Based Data Anonymization Method for Improving Effectiveness of Secondary Use. IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society. :3138–3143.
The recent emergence of smartphones, cloud computing, and the Internet of Things has brought about an explosion of data creation. By collating and merging these enormous data with other information, information-based services become more sophisticated and advanced. At the same time, however, the privacy violations caused by such merging must be considered. Various anonymization methods have been proposed to preserve privacy. Conventional perturbation-based anonymization of location data adds comparatively large noise, and this noise makes it difficult to utilize the data effectively for secondary use. In this research, to solve these problems, we first clarify the definition of privacy preservation and then propose TMk-anonymity according to that definition.
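TMk-anonymity itself is not specified in the abstract. The sketch below only illustrates the classic k-anonymity condition that perturbation-based location anonymization targets, using a hypothetical grid generalization: after snapping coordinates to coarse cells, every occupied cell must contain at least k users.

    # Illustrative sketch of a plain k-anonymity check on generalized
    # locations; this is NOT the TMk-anonymity construction.
    from collections import Counter

    def generalize(lat, lon, cell=0.01):
        """Snap a coordinate to a coarse grid cell."""
        return (round(lat / cell) * cell, round(lon / cell) * cell)

    def is_k_anonymous(points, k=3, cell=0.01):
        counts = Counter(generalize(lat, lon, cell) for lat, lon in points)
        return all(c >= k for c in counts.values())

    users = [(35.681, 139.767), (35.682, 139.768),
             (35.683, 139.766), (35.690, 139.700)]
    print(is_k_anonymous(users, k=3))  # False: the last user is alone in its cell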

2018-07-06
Zhang, F., Chan, P. P. K., Tang, T. Q..  2015.  L-GEM based robust learning against poisoning attack. 2015 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). :175–178.
Poisoning attacks, in which an adversary misleads the learning process by manipulating its training set, significantly affect the performance of classifiers in security applications. This paper proposes a robust learning method that reduces the influence of attack samples on learning. The sensitivity, defined as the fluctuation of the output under a small perturbation of the input, in the Localized Generalization Error Model (L-GEM) is measured for each training sample. The classifier's output on attack samples may be sensitive and inaccurate, since these samples differ from the untainted samples. An importance score is assigned to each sample according to its localized generalization error bound, and the classifier is trained on a new training set obtained by resampling the samples according to their importance scores. An RBFNN is applied as the classifier in the experimental evaluation. The proposed model outperforms the traditional one under the well-known label-flip poisoning attacks, including nearest-first and farthest-first flip attacks.
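As a loose sketch of the resampling idea (not the authors' exact L-GEM bound), one can probe a trained model with small random input perturbations, score each training sample by the inverse of its output fluctuation, and resample the training set so that highly sensitive, likely poisoned samples are drawn rarely.

    # Illustrative sensitivity-weighted resampling; the toy model stands
    # in for a trained RBF network and is not the paper's formulation.
    import numpy as np

    rng = np.random.default_rng(0)

    def sensitivity(model, x, eps=0.05, probes=20):
        """Mean output fluctuation under small random input perturbations."""
        base = model(x)
        noise = rng.uniform(-eps, eps, size=(probes, x.size))
        return np.mean([abs(model(x + d) - base) for d in noise])

    model = lambda x: np.tanh(x @ np.array([1.5, -2.0]))  # toy classifier

    X = rng.normal(size=(100, 2))
    scores = np.array([1.0 / (1e-6 + sensitivity(model, x)) for x in X])
    probs = scores / scores.sum()

    # Resample: low-sensitivity samples dominate the new training set.
    idx = rng.choice(len(X), size=len(X), replace=True, p=probs)
    X_clean = X[idx]  # retrain the classifier on X_clean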