Biblio

Filters: Author is Wu, Yi
Zhang, Wei, Zhang, ZhiShuo, Wu, Yi.  2020.  Multi-Authority Attribute Based Encryption With Policy-hidden and Accountability. 2020 International Conference on Space-Air-Ground Computing (SAGC). :95–96.
In this paper, an attribute-based encryption scheme with hidden policy and key tracing under multiple authorities is proposed. In our scheme, the access structure is embedded into the ciphertext implicitly, so an attacker cannot obtain the user's private information from the access structure. Key traceability is realized across multiple authorities, and collusion is prevented. Finally, based on the DBDH assumption, the scheme is proved to resist chosen-plaintext attacks in the standard model.
Lu, Xiao, Jing, Jiangping, Wu, Yi.  2020.  False Data Injection Attack Location Detection Based on Classification Method in Smart Grid. 2020 2nd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM). :133–136.
State estimation technology is used to estimate the grid state from meter data and the grid topology. A false data injection attack (FDIA) is an information attack that disturbs the security of the power system by tampering with meter measurements. Current FDIA detection research focuses on detecting the attack's presence, but the location of an FDIA is also important for power system security. In this paper, locating an FDIA at the meter level is treated as a multi-label classification problem, where each label represents the state of the corresponding meter. A multi-label decision tree ensemble is used as the classifier to detect the exact location of the FDIA. The method requires neither knowledge of the power topology nor statistical assumptions. Numerical experiments on the IEEE-14 bus system validate the performance of the proposed method.
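The multi-label formulation in this abstract can be sketched as follows (a hypothetical illustration, not the authors' code): each meter gets a binary label indicating whether its reading was falsified, and a scikit-learn decision tree, which natively supports multi-output targets, predicts the whole label vector at once. The data below is synthetic; the meter count, injected-bias magnitudes, and all variable names are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: FDIA location detection as multi-label classification,
# one binary label per meter (1 = this meter's reading was falsified).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_meters = 2000, 20

# Normal measurements: baseline signal plus Gaussian noise.
X = rng.normal(loc=1.0, scale=0.05, size=(n_samples, n_meters))
# Labels: 1 where a meter's reading has been falsified, 0 otherwise.
Y = np.zeros((n_samples, n_meters), dtype=int)
attacked = rng.random(n_samples) < 0.5  # half the samples contain an attack
for i in np.where(attacked)[0]:
    meters = rng.choice(n_meters, size=rng.integers(1, 4), replace=False)
    X[i, meters] += rng.uniform(0.3, 0.6, size=meters.size)  # injected bias
    Y[i, meters] = 1

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# scikit-learn decision trees accept a 2-D target, so each prediction is the
# full vector of per-meter labels -- no topology or noise statistics needed.
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, Y_tr)
Y_pred = clf.predict(X_te)
per_meter_acc = (Y_pred == Y_te).mean()
print(f"per-meter label accuracy: {per_meter_acc:.3f}")
```

The paper evaluates on the IEEE-14 bus system with real state-estimation measurements; this toy version only shows why a single multi-output tree suffices as the multi-label classifier.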
Wu, Yi, Liu, Jian, Chen, Yingying, Cheng, Jerry.  2019.  Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples. 2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN). :1–5.
As automatic speech recognition (ASR) systems have been integrated into a diverse set of devices around us in recent years, their security vulnerabilities have become an increasing public concern. Existing studies have demonstrated that deep neural networks (DNNs), the computational core of ASR systems, are vulnerable to deliberately designed adversarial attacks. Using gradient descent, prior work has successfully generated adversarial samples that disturb ASR systems and produce transcript texts chosen by the adversary. Most of this research simulated white-box attacks, which require knowledge of all components of the targeted ASR system. In this work, we propose the first semi-black-box attack against the ASR system Kaldi. Requiring only partial information from Kaldi and none from the DNN, we embed malicious commands into a single audio clip using a gradient-independent genetic algorithm. The crafted audio clip is recognized by Kaldi as the embedded malicious commands while remaining unnoticeable to humans. Experiments show that our attack achieves a high attack success rate with unnoticeable perturbations on three types of audio clips (pop music, pure music, and human command) without access to the underlying DNN model's parameters or architecture.
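The gradient-independent genetic algorithm mentioned in this abstract can be sketched roughly as below (a toy illustration, not the authors' implementation). The real attack scores candidates by querying Kaldi with partial information; here `target_score` is a stand-in black-box fitness function, and the clip length, population size, perturbation bound, and all names are assumptions for illustration.

```python
# Hypothetical sketch of a gradient-free genetic algorithm that evolves a
# bounded perturbation of an audio clip toward a target recognition result.
import numpy as np

rng = np.random.default_rng(1)
clip = rng.normal(size=256)    # stand-in for the host audio clip samples
target = rng.normal(size=256)  # stand-in for "sounds like the command"

def target_score(audio):
    """Black-box fitness: higher when the recognizer favors the command.
    In the real attack this would come from querying Kaldi, not a norm."""
    return -np.linalg.norm(audio - target)

def evolve(clip, pop_size=50, n_gen=200, eps=0.5, sigma=0.05):
    # Population of perturbations, bounded by eps to stay unnoticeable.
    pop = rng.uniform(-eps, eps, size=(pop_size, clip.size))
    for _ in range(n_gen):
        fitness = np.array([target_score(clip + p) for p in pop])
        order = np.argsort(fitness)[::-1]
        elite = pop[order[:pop_size // 2]]       # selection: keep best half
        parents = elite[rng.integers(len(elite), size=(pop_size // 2, 2))]
        mask = rng.random((pop_size // 2, clip.size)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])  # crossover
        children += rng.normal(0, sigma, children.shape)         # mutation
        pop = np.vstack([elite, np.clip(children, -eps, eps)])   # stay bounded
    best = pop[np.argmax([target_score(clip + p) for p in pop])]
    return clip + best

adv = evolve(clip)
print("score before:", target_score(clip), "after:", target_score(adv))
```

No gradients of the DNN are used anywhere: only black-box fitness queries drive selection, crossover, and mutation, which is what makes the approach semi-black-box.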