Biblio

Found 189 results

Filters: Keyword is Data models
2020-04-20
Lim, Yeon-sup, Srivatsa, Mudhakar, Chakraborty, Supriyo, Taylor, Ian.  2018.  Learning Light-Weight Edge-Deployable Privacy Models. 2018 IEEE International Conference on Big Data (Big Data). :1290–1295.
Privacy has become one of the most important issues in data-driven applications. The advent of non-PC devices, such as Internet-of-Things (IoT) devices, for data-driven applications creates a need for light-weight data anonymization. In this paper, we develop an anonymization framework that expedites model learning in parallel and generates deployable models for devices with low computing capability. We evaluate our framework under various settings, such as different data schemas and characteristics. Our results show that our framework learns anonymization models up to 16 times faster than a sequential anonymization approach and that it preserves enough information in anonymized data for data-driven applications.
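As a rough illustration of what an anonymization model must guarantee (not the authors' framework, which is not reproduced here), the following sketch checks k-anonymity over a set of quasi-identifiers; the column names and the value of k are illustrative assumptions.

```python
# Minimal k-anonymity check over quasi-identifiers (illustrative; not the
# paper's framework). Column names and k are assumptions for the example.
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int) -> bool:
    """True if every combination of quasi-identifier values occurs >= k times."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

records = pd.DataFrame({
    "age_range": ["20-29", "20-29", "30-39", "30-39", "30-39"],
    "zip_prefix": ["021**", "021**", "100**", "100**", "100**"],
    "diagnosis": ["flu", "cold", "flu", "flu", "cold"],  # sensitive attribute
})

print(is_k_anonymous(records, ["age_range", "zip_prefix"], k=2))  # True
```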
2020-04-17
Alim, Adil, Zhao, Xujiang, Cho, Jin-Hee, Chen, Feng.  2019.  Uncertainty-Aware Opinion Inference Under Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :6–15.

Inference of unknown opinions from uncertain, adversarial (e.g., incorrect or conflicting) evidence in large datasets is not a trivial task. Without proper handling, it can easily mislead decision making in data mining tasks. In this work, we propose a highly scalable probabilistic opinion inference model, Adversarial Collective Opinion Inference (Adv-COI), which infers unknown opinions with high scalability and robustness in the presence of uncertain, adversarial evidence by enhancing Collective Subjective Logic (CSL), which combines Subjective Logic (SL) and Probabilistic Soft Logic (PSL). The key idea behind Adv-COI is to learn a model that is robust against uncertain, adversarial evidence, formulated as a min-max problem. We validate that Adv-COI outperforms baseline models and its competitive counterparts under adversarial attacks on logic-rule-based structured data, as well as white-box and black-box adversarial attacks, on both clean and perturbed semi-synthetic and real-world datasets in three real-world applications. The results show that Adv-COI achieves the lowest mean absolute error in the expected truth probability while having the lowest running time among all compared methods.

2020-04-06
Frahat, Rzan Tarig, Monowar, Muhammed Mostafa, Buhari, Seyed M.  2019.  Secure and Scalable Trust Management Model for IoT P2P Network. 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS). :1–6.
IoT trust management is a security solution that establishes trust between different IoT entities before they form any relationship with other anonymous devices. Recent research in the literature tends to use Blockchain-based trust management models for IoT, together with the fog node approach, in order to address the constraints of IoT resources. Blockchain has indeed solved many drawbacks of centralized models. However, it is still not well suited for dealing with the massive data produced by IoT because of drawbacks such as delay, network overhead, and scalability issues. Therefore, in this paper we define some factors that should be considered when designing scalable models, and we propose a fully distributed trust management model for IoT that provides a large-scale trust model and addresses the limitations of Blockchain. We design our model based on a new approach called Holochain, considering security issues such as misbehavior detection, data integrity, and availability.
2020-04-03
Song, Liwei, Shokri, Reza, Mittal, Prateek.  2019.  Membership Inference Attacks Against Adversarially Robust Deep Learning Models. 2019 IEEE Security and Privacy Workshops (SPW). :50–56.
In recent years, the research community has increasingly focused on understanding the security and privacy challenges posed by deep learning models. However, the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards enhancing our understanding of deep learning models when the two domains are combined. We do this by measuring the success of membership inference attacks against two state-of-the-art adversarial defense methods that mitigate evasion attacks: adversarial training and provable defense. On the one hand, membership inference attacks aim to infer an individual's participation in the target model's training dataset and are known to be correlated with the target model's overfitting. On the other hand, adversarial defense methods aim to enhance the robustness of target models by ensuring that model predictions are unchanged for a small area around each sample in the training dataset. Intuitively, adversarial defenses may rely more on the training dataset and be more vulnerable to membership inference attacks. By performing empirical membership inference attacks on both adversarially robust models and corresponding undefended models, we find that the adversarial training method is indeed more susceptible to membership inference attacks, and the privacy leakage is directly correlated with model robustness. We also find that the provable defense approach does not lead to enhanced success of membership inference attacks. However, this is achieved by significantly sacrificing the accuracy of the model on benign data points, indicating that privacy, security, and prediction accuracy are not jointly achieved by these two approaches.
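For readers unfamiliar with the attack, a common baseline membership inference attack simply thresholds the per-sample loss: members of the training set tend to receive lower loss. The sketch below illustrates that signal on toy model outputs; the paper's attacks are more elaborate, and the threshold here is an assumption.

```python
# Baseline loss-threshold membership inference (a simplified stand-in for the
# attacks evaluated in the paper): samples with loss below a threshold are
# predicted to be training-set members.
import numpy as np

def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-sample cross-entropy loss; probs has shape (n, classes)."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def infer_membership(probs, labels, threshold):
    return cross_entropy(np.asarray(probs), np.asarray(labels)) < threshold

# Toy model outputs: members tend to get confident (low-loss) predictions.
member_probs = [[0.95, 0.05], [0.90, 0.10]]
nonmember_probs = [[0.60, 0.40], [0.55, 0.45]]
labels = [0, 0]

print(infer_membership(member_probs, labels, threshold=0.2))     # [ True  True]
print(infer_membership(nonmember_probs, labels, threshold=0.2))  # [False False]
```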
Zhao, Hui, Li, Zhihui, Wei, Hansheng, Shi, Jianqi, Huang, Yanhong.  2019.  SeqFuzzer: An Industrial Protocol Fuzzing Framework from a Deep Learning Perspective. 2019 12th IEEE Conference on Software Testing, Validation and Verification (ICST). :59–67.

Industrial networks are the cornerstone of modern industrial control systems. Performing security checks of industrial communication processes helps detect unknown risks and vulnerabilities. Fuzz testing is a widely used method for performing security checks that takes advantage of automation. However, it is a big challenge to carry out security checks on industrial networks due to the increasing variety and complexity of industrial communication protocols. Existing approaches usually take a long time to model the protocol for generating test cases, which is labor-intensive and time-consuming. This becomes even worse when the target protocol is stateful. To help address this problem, we employed a deep learning model to learn the structures of protocol frames and deal with the temporal features of stateful protocols. We propose a fuzzing framework named SeqFuzzer that automatically learns the protocol frame structures from communication traffic and generates fake but plausible messages as test cases. To demonstrate the usability of our approach, we applied SeqFuzzer to widely used Ethernet for Control Automation Technology (EtherCAT) devices and successfully detected several security vulnerabilities.
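As a lightweight stand-in for the paper's seq2seq learner, the sketch below fits a byte-level Markov chain on captured frames and samples fake-but-plausible messages from it; the frames are invented, and SeqFuzzer's actual deep model is considerably more capable.

```python
# Lightweight stand-in for SeqFuzzer's learned generator: a byte-level Markov
# chain fit on captured frames, then sampled to produce fake-but-plausible
# messages. (The paper uses a seq2seq deep model; the frames are invented.)
import random
from collections import defaultdict

def fit(frames):
    """Record, for each byte value, the bytes observed to follow it."""
    table = defaultdict(list)
    for frame in frames:
        for a, b in zip(frame, frame[1:]):
            table[a].append(b)
    return table

def sample(table, start, length, rng=random.Random(0)):
    """Walk the transition table to emit a plausible-looking frame."""
    out = bytearray([start])
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return bytes(out)

captured = [bytes.fromhex("aa01ff10"), bytes.fromhex("aa02ff20"), bytes.fromhex("aa01ff30")]
model = fit(captured)
print(sample(model, start=0xAA, length=4).hex())  # e.g. 'aa01ff10'
```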

Gerl, Armin, Becher, Stefan.  2019.  Policy-Based De-Identification Test Framework. 2019 IEEE World Congress on Services (SERVICES). 2642-939X:356–357.
Protecting the privacy of individuals is a basic right, which has to be considered in our data-centered society in which new technologies emerge rapidly. To preserve the privacy of individuals, de-identifying technologies have been developed, including pseudonymization, personal privacy anonymization, and privacy models, each having several variations with different properties and contexts, which poses the challenge of properly selecting and applying de-identification methods. We tackle this challenge by proposing a policy-based de-identification test framework for a systematic approach to experimenting with and evaluating various combinations of methods and their interplay. Evaluation of the experimental results regarding performance and utility is considered within the framework. We propose a domain-specific language expressing the required complex configuration options, including data-set, policy generator, and various de-identification methods.
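One de-identification method such a test framework could exercise is keyed pseudonymization. The sketch below uses HMAC-SHA256, which is deterministic per key but resists dictionary attacks without it; the key and record fields are illustrative assumptions.

```python
# Minimal keyed pseudonymization, one of the de-identification methods such a
# test framework could exercise. The key and record fields are illustrative.
import hmac, hashlib

SECRET_KEY = b"rotate-me-regularly"  # assumption: managed outside this sketch

def pseudonymize(value: str) -> str:
    """Deterministic pseudonym: same input -> same token, but not recoverable
    without the key (unlike plain hashing, this resists rainbow tables)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Alice Example", "diagnosis": "flu"}
record["name"] = pseudonymize(record["name"])
print(record)
```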
Kantarcioglu, Murat, Shaon, Fahad.  2019.  Securing Big Data in the Age of AI. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :218–220.

Increasingly, organizations are collecting ever-larger amounts of data to build complex data analytics, machine learning, and AI models. Furthermore, the data needed for building such models may be unstructured (e.g., text, images, and video). Hence, such data may be stored in different data management systems, ranging from relational databases to newer NoSQL databases tailored for storing unstructured data. In addition, data scientists are increasingly using programming languages such as Python and R to process data using many existing libraries. In some cases, the developed code will be automatically executed by the NoSQL system on the stored data. These developments indicate the need for a data security and privacy solution that can uniformly protect data stored in many different data management systems and enforce security policies even if sensitive data is processed by a complex program submitted by a data scientist. In this paper, we introduce our vision for building such a solution for protecting big data. Specifically, our proposed system allows organizations to 1) enforce policies that control access to sensitive data, 2) automatically keep the necessary audit logs for data governance and regulatory compliance, 3) sanitize and redact sensitive data on-the-fly based on data sensitivity and AI model needs, 4) detect potentially unauthorized or anomalous access to sensitive data, and 5) automatically create attribute-based access control policies based on data sensitivity and data type.

2020-03-30
Scherzinger, Stefanie, Seifert, Christin, Wiese, Lena.  2019.  The Best of Both Worlds: Challenges in Linking Provenance and Explainability in Distributed Machine Learning. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1620–1629.
Machine learning experts prefer to think of their input as a single, homogeneous, and consistent data set. However, when analyzing large volumes of data, the entire data set may not be manageable on a single server, but must be stored on a distributed file system instead. Moreover, with the pressing demand to deliver explainable models, the experts may no longer focus on the machine learning algorithms in isolation, but must take into account the distributed nature of the stored data, as well as the impact of any data pre-processing steps upstream in their data analysis pipeline. In this paper, we make the point that even basic transformations during data preparation can impact the model learned, and that this is exacerbated in a distributed setting. We then sketch our vision of end-to-end explainability of the model learned, taking the pre-processing into account. In particular, we point out the potential of linking the contributions of research on data provenance with the efforts on explainability in machine learning. In doing so, we highlight pitfalls we may experience in a distributed system on the way to generating more holistic explanations for our machine learning models.
Miao, Hui, Deshpande, Amol.  2019.  Understanding Data Science Lifecycle Provenance via Graph Segmentation and Summarization. 2019 IEEE 35th International Conference on Data Engineering (ICDE). :1710–1713.
Increasingly, modern data science platforms have non-intrusive and extensible provenance ingestion mechanisms to collect rich provenance and context information, handle modifications to the same file using distinguishable versions, and use graph data models (e.g., property graphs) and query languages (e.g., Cypher) to represent and manipulate the stored provenance/context information. Due to the schema-later nature of the metadata, multiple versions of the same files, and unfamiliar artifacts introduced by team members, the resulting "provenance graphs" are quite verbose and evolving; further, it is very difficult for users to compose queries and utilize this valuable information using standard graph query models alone. In this paper, we propose two high-level graph query operators to address the verbose and evolving nature of such provenance graphs. First, we introduce a graph segmentation operator, which queries the retrospective provenance between a set of source vertices and a set of destination vertices via flexible boundary criteria to help users gain insight about the derivation relationships among those vertices. We define the semantics of such a query in terms of a context-free grammar and develop efficient algorithms that run orders of magnitude faster than the state of the art. Second, we propose a graph summarization operator that combines similar segments together to query the prospective provenance of the underlying project. The operator allows tuning the summary by ignoring vertex details and characterizing local structures, and it preserves the provenance meaning using path constraints. We show that the optimal summary problem is PSPACE-complete and develop effective approximation algorithms. We implement the operators on top of Neo4j, evaluate our query techniques extensively, and show the effectiveness and efficiency of the proposed methods.
Narendra, Nanjangud C., Shukla, Anshu, Nayak, Sambit, Jagadish, Asha, Kalkur, Rachana.  2019.  Genoma: Distributed Provenance as a Service for IoT-based Systems. 2019 IEEE 5th World Forum on Internet of Things (WF-IoT). :755–760.
One of the key aspects of IoT-based systems, which we believe has not been getting the attention it deserves, is provenance. Provenance refers to those actions that record the usage of data in the system, along with the rationale for said usage. Historically, most provenance methods in distributed systems have been tightly coupled with those of the underlying data processing frameworks in such systems. However, in this paper, we argue that IoT provenance requires a different treatment, given the heterogeneity and dynamism of IoT-based systems. In particular, provenance in IoT-based systems should be decoupled as far as possible from the underlying data processing substrates. To that end, in this paper, we present Genoma, our ongoing work on a system for provenance-as-a-service in IoT-based systems. By "provenance-as-a-service" we mean provenance that is distributed across IoT devices, edge, and cloud, and that is agnostic of the underlying data processing substrate. Genoma comprises a set of services that act together to provide useful provenance information to users across the system. We also show how we are realizing Genoma via an implementation prototype built on Apache Atlas and Tinkergraph, through which we are investigating several key research issues in distributed IoT provenance.
2020-03-18
Promyslov, Vitaly, Jharko, Elena, Semenkov, Kirill.  2019.  Principles of Physical and Information Model Integration for Cybersecurity Provision to a Nuclear Power Plant. 2019 Twelfth International Conference "Management of large-scale system development" (MLSD). :1–3.
For complex technical objects the research of cybersecurity problems should take into account both physical and information properties of the object. The paper considers a hybrid model that unifies information and physical models and may be used as a tool for countering cyber threats and for cybersecurity risk assessment at the design and operational stage of an object's lifecycle.
2020-03-16
Singh, Rina, Graves, Jeffrey A., Anantharaj, Valentine, Sukumar, Sreenivas R..  2019.  Evaluating Scientific Workflow Engines for Data and Compute Intensive Discoveries. 2019 IEEE International Conference on Big Data (Big Data). :4553–4560.
Workflow engines used to script scientific experiments involving numerical simulation, data analysis, instruments, edge sensors, and artificial intelligence have to deal with the complexities of hardware, software, resource availability, and the collaborative nature of science. In this paper, we survey workflow engines used in data-intensive and compute-intensive discovery pipelines from scientific disciplines such as astronomy, high energy physics, earth system science, bio-medicine, and material science and present a qualitative analysis of their respective capabilities. We compare five popular workflow engines and their differentiated approaches to job orchestration, job launching, data management and provenance, security authentication, ease-of-use, workflow description, and scripting semantics. The comparisons presented in this paper allow practitioners to choose the appropriate engine for their scientific experiment and lead to recommendations for future work.
2020-03-12
Salmani, Hassan, Hoque, Tamzidul, Bhunia, Swarup, Yasin, Muhammad, Rajendran, Jeyavijayan JV, Karimi, Naghmeh.  2019.  Special Session: Countering IP Security Threats in Supply Chain. 2019 IEEE 37th VLSI Test Symposium (VTS). :1–9.

The continuing decrease in the feature size of integrated circuits, together with the increase in the complexity and cost of design and fabrication, has led to outsourcing the design and fabrication of integrated circuits to third parties across the globe, which in turn has introduced several security vulnerabilities. Adversaries in the supply chain can pirate integrated circuits, overproduce these circuits, perform reverse engineering, and/or insert hardware Trojans in these circuits. Developing countermeasures against such security threats is highly crucial. Accordingly, this paper first develops a learning-based trust verification framework to detect hardware Trojans. To tackle Trojan insertion, IP piracy, and overproduction, logic locking schemes, in particular stripped-functionality logic locking, are discussed and their resiliency against state-of-the-art attacks is investigated.
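In its simplest form, logic locking inserts XOR/XNOR key gates so that the circuit computes the intended function only under the correct key. The toy sketch below locks a single AND gate; stripped-functionality logic locking, discussed in the paper, is considerably more involved.

```python
# Toy XOR key-gate logic locking: the locked circuit matches the original
# function only when the correct key bit is supplied. Stripped-functionality
# logic locking (discussed in the paper) is considerably more involved.
def original(a: int, b: int) -> int:
    return a & b

CORRECT_KEY = 1  # assumption for the example

def locked(a: int, b: int, key_bit: int) -> int:
    # XOR key gate on the output: inverts the result unless key_bit is correct.
    return (a & b) ^ key_bit ^ CORRECT_KEY

for a in (0, 1):
    for b in (0, 1):
        assert locked(a, b, CORRECT_KEY) == original(a, b)      # right key: correct
        assert locked(a, b, 1 - CORRECT_KEY) != original(a, b)  # wrong key: corrupted
print("locked circuit verified")
```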

Lafram, Ichrak, Berbiche, Naoual, El Alami, Jamila.  2019.  Artificial Neural Networks Optimized with Unsupervised Clustering for IDS Classification. 2019 1st International Conference on Smart Systems and Data Science (ICSSD). :1–7.

Information systems are becoming more and more complex and closely linked. These systems encounter an enormous amount of nefarious traffic while ensuring real-time connectivity. Therefore, a defense method needs to be in place. One of the commonly used tools for network security is the intrusion detection system (IDS). An IDS tries to identify fraudulent activity using predetermined signatures or pre-established user misbehavior while monitoring incoming traffic. Intrusion detection systems based on signatures and behavior cannot detect new attacks and fail when small behavior deviations occur. Many researchers have proposed various approaches to intrusion detection using machine learning techniques as a new and promising tool to remedy this problem. In this paper, the authors present a combination of two machine learning methods, unsupervised clustering followed by a supervised classification framework, as a fast, highly scalable, and precise packet classification system. The model's performance is assessed on the dataset recently proposed by the Canadian Institute for Cybersecurity and the University of New Brunswick (CICIDS2017). The overall process was fast, showing highly accurate classification results.
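A minimal scikit-learn sketch of the two-stage idea follows: an unsupervised clustering step whose cluster assignment is appended as an extra feature for a supervised classifier. Synthetic data stands in for CICIDS2017, and all hyperparameters are illustrative.

```python
# Minimal sketch of the two-stage idea: unsupervised clustering first, with
# the cluster assignment fed as an extra feature to a supervised classifier.
# Synthetic data stands in for CICIDS2017; hyperparameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))          # stand-in flow features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in attack labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_train)
X_train_aug = np.column_stack([X_train, kmeans.predict(X_train)])
X_test_aug = np.column_stack([X_test, kmeans.predict(X_test)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train_aug, y_train)
print(f"test accuracy: {clf.score(X_test_aug, y_test):.3f}")
```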

2020-03-09
Sion, Laurens, Van Landuyt, Dimitri, Wuyts, Kim, Joosen, Wouter.  2019.  Privacy Risk Assessment for Data Subject-Aware Threat Modeling. 2019 IEEE Security and Privacy Workshops (SPW). :64–71.
Regulatory efforts such as the General Data Protection Regulation (GDPR) embody a notion of privacy risk that is centered around the fundamental rights of data subjects. This is, however, a fundamentally different notion of privacy risk than the one commonly used in threat modeling, which is largely agnostic of the involved data subjects. This mismatch hampers the applicability of privacy threat modeling approaches such as LINDDUN in a Data Protection by Design (DPbD) context. In this paper, we present a data subject-aware privacy risk assessment model in specific support of privacy threat modeling activities. This model allows the threat modeler to draw upon a more holistic understanding of privacy risk while assessing the relevance of specific privacy threats to the system under design. Additionally, we propose a number of improvements to privacy threat modeling, such as enriching Data Flow Diagram (DFD) system models with appropriate risk inputs (e.g., information on data types and involved data subjects). Incorporation of these risk inputs in DFDs, in combination with a risk estimation approach using Monte Carlo simulations, leads to a more comprehensive assessment of privacy risk. The proposed risk model has been integrated in a threat modeling tool prototype and validated in the context of a realistic eHealth application.
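The Monte Carlo ingredient can be illustrated compactly: sample uncertain likelihood and impact distributions per threat and aggregate them into a risk distribution. The threats, distributions, and parameters below are invented for the example.

```python
# Monte Carlo risk estimation in the spirit of the paper: sample uncertain
# likelihood and impact per threat, aggregate into a risk distribution.
# All threat names, distributions, and parameters below are invented.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # simulation runs

# Per-threat annual frequency (Poisson rate) and per-event impact (lognormal).
threats = {
    "linkability_of_health_records": (0.8, (8.0, 1.0)),
    "identifiability_via_dfd_store": (0.3, (9.0, 0.8)),
}

total_loss = np.zeros(N)
for name, (rate, (mu, sigma)) in threats.items():
    events = rng.poisson(rate, N)
    # Approximation: sum of per-event impacts modeled as count * one sample.
    total_loss += events * rng.lognormal(mu, sigma, N)

print(f"median annual risk: {np.median(total_loss):,.0f}")
print(f"95th percentile:    {np.percentile(total_loss, 95):,.0f}")
```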
2020-02-26
Sabbagh, Majid, Gongye, Cheng, Fei, Yunsi, Wang, Yanzhi.  2019.  Evaluating Fault Resiliency of Compressed Deep Neural Networks. 2019 IEEE International Conference on Embedded Software and Systems (ICESS). :1–7.

Model compression is considered to be an effective way to reduce the implementation cost of deep neural networks (DNNs) while maintaining the inference accuracy. Many recent studies have developed efficient model compression algorithms and implementations in accelerators on various devices. Protecting the integrity of DNN inference against fault attacks is important for diverse deep-learning-enabled applications. However, there has been little research investigating the fault resilience of DNNs and the impact of model compression on fault tolerance. In this work, we consider faults on different data types and develop a simulation framework for understanding the fault resiliency of compressed DNN models as compared to uncompressed models. We perform our experiments on two common DNNs, LeNet-5 and VGG16, and evaluate their fault resiliency with different types of compression. The results show that binary quantization can effectively increase the fault resilience of DNN models by 10000x for both LeNet-5 and VGG16. Finally, we propose software and hardware mitigation techniques to increase the fault resiliency of DNN models.
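The core primitive of such a fault-injection simulation is small: flip a randomly chosen bit in a weight tensor and observe the effect on inference. The numpy sketch below shows that primitive for float32 weights; it is not the authors' framework.

```python
# Core primitive of such a fault-injection simulation: flip one random bit in
# a float32 weight tensor. A minimal numpy sketch, not the authors' framework.
import numpy as np

def flip_random_bit(weights: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a copy of float32 `weights` with one randomly chosen bit flipped."""
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint32)      # reinterpret the bits, no value conversion
    idx = rng.integers(len(raw))    # which weight
    bit = rng.integers(32)          # which bit (sign/exponent/mantissa)
    raw[idx] ^= np.uint32(1) << np.uint32(bit)
    return flat.reshape(weights.shape)

rng = np.random.default_rng(0)
w = np.ones((2, 3), dtype=np.float32)
w_faulty = flip_random_bit(w, rng)
print(np.abs(w_faulty - w).max())  # error magnitude depends on the flipped bit
```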

2020-02-24
Song, Juncai, Zhao, Jiwen, Dong, Fei, Zhao, Jing, Xu, Liang, Wang, Lijun, Xie, Fang.  2019.  Demagnetization Modeling Research for Permanent Magnet in PMSLM Using Extreme Learning Machine. 2019 IEEE International Electric Machines & Drives Conference (IEMDC). :1757–1761.
This paper investigates a temperature demagnetization modeling method for permanent magnets (PM) in permanent magnet synchronous linear motors (PMSLM). First, the PM characteristics are presented, and finite element analysis (FEA) is conducted to show the magnetic distribution under different temperatures. Second, the demagnetization degrees and remanence of five PM experiment samples are measured in a stove at temperatures varying from room temperature to 300 °C to obtain real data for the subsequent modeling. Third, a machine learning algorithm called the extreme learning machine (ELM) is introduced to map the nonlinear relationships between temperature and the demagnetization characteristics of the PMs and to build the demagnetization models. Finally, comparison experiments between the linear modeling method, the polynomial modeling method, and ELM confirm the effectiveness and advantages of the proposed method.
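The ELM itself fits in a few lines: a random, untrained hidden layer followed by a closed-form least-squares solve for the output weights. In the sketch below, a toy one-dimensional curve stands in for the measured temperature-to-remanence data.

```python
# The extreme learning machine in brief: a random, untrained hidden layer and
# a closed-form least-squares solve for the output weights. The toy 1-D data
# below stands in for the measured temperature-to-remanence curve.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=50):
    W = rng.normal(size=(X.shape[1], hidden))     # random input weights (never trained)
    b = rng.normal(size=hidden)                   # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

temp = np.linspace(20, 300, 60).reshape(-1, 1)                   # degrees C
remanence = 1.2 - 0.002 * temp[:, 0] + rng.normal(0, 0.005, 60)  # toy curve

W, b, beta = elm_fit(temp / 300.0, remanence)                    # scale input
print(elm_predict(np.array([[150.0]]) / 300.0, W, b, beta))
```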
2020-02-18
Talluri, Sacheendra, Iosup, Alexandru.  2019.  Efficient Estimation of Read Density When Caching for Big Data Processing. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :502–507.

Big data processing systems are becoming increasingly more present in cloud workloads. Consequently, they are starting to incorporate more sophisticated mechanisms from traditional database and distributed systems. We focus in this work on the use of caching policies, which for big data raise important new challenges. Not only must they respond to new variants of the trade-off between hit rate, response time, and the space consumed by the cache, but they must do so at possibly higher volume and velocity than web and database workloads. Previous caching policies have not been tested experimentally with big data workloads. We address these challenges in this work. We propose the Read Density family of policies, which is a principled approach to quantifying the utility of cached objects through a family of utility functions that depend on the frequency of reads of an object. We further design the Approximate Histogram, a technique based on an array of counters that promises runtime- and space-efficient computation of the metric required by the cache policy. We evaluate through trace-based simulation the caching policies from the Read Density family and compare them with over ten state-of-the-art alternatives. We use two workload traces representative of big data processing, collected from commercial Spark and MapReduce deployments. While we achieve performance comparable to the state of the art with fewer parameters, meaningful performance improvements for big data workloads remain elusive.
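The gist of a frequency-driven utility policy can be sketched as follows: track reads per cached object and evict the object with the lowest utility. The utility function below (reads per unit of space) is one plausible member of such a family, not necessarily the paper's exact definition.

```python
# Gist of a frequency-driven utility policy: track reads per cached object and
# evict the lowest-utility one. The utility function below is one plausible
# member of such a family, not necessarily the paper's exact definition.
class FrequencyUtilityCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}   # key -> value
        self.reads = {}   # key -> read count

    def utility(self, key) -> float:
        # Assumed utility: reads per unit of space (all objects sized 1 here).
        return self.reads.get(key, 0)

    def get(self, key):
        if key in self.store:
            self.reads[key] = self.reads.get(key, 0) + 1
        return self.store.get(key)

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self.utility)  # evict lowest utility
            del self.store[victim]
        self.store[key] = value

cache = FrequencyUtilityCache(capacity=2)
cache.put("a", 1); cache.get("a"); cache.get("a")
cache.put("b", 2); cache.get("b")
cache.put("c", 3)           # evicts "b" (fewer recorded reads than "a")
print(sorted(cache.store))  # ['a', 'c']
```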

Huang, Yonghong, Verma, Utkarsh, Fralick, Celeste, Infantec-Lopez, Gabriel, Kumar, Brajesh, Woodward, Carl.  2019.  Malware Evasion Attack and Defense. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :34–38.

Machine learning (ML) classifiers are vulnerable to adversarial examples. An adversarial example is an input sample that is slightly modified to induce misclassification in an ML classifier. In this work, we investigate white-box and grey-box evasion attacks against an ML-based malware detector and conduct performance evaluations in a real-world setting. We compare the defense approaches in mitigating the attacks. We propose a framework for deploying grey-box and black-box attacks to malware detection systems.
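A minimal white-box evasion example on a linear malware scorer over binary features: the attacker adds the absent features whose weights push the score down the most until the sample is classified benign. Weights and features are invented; real attacks must also keep the malware functional.

```python
# Minimal white-box evasion against a linear malware scorer over binary
# features: set to 1 the absent features whose weights push the score down
# the most, until the sample crosses the benign threshold. Weights and
# features are invented; real attacks must preserve malware functionality.
import numpy as np

weights = np.array([2.0, 1.5, -1.0, -2.5, -0.5])  # positive = malicious indicator
bias = -0.5
x = np.array([1, 1, 0, 0, 0], dtype=float)        # original malware sample

def score(x):
    return weights @ x + bias                     # > 0 -> flagged as malware

for i in np.argsort(weights):                     # most negative weights first
    if score(x) <= 0:
        break
    if x[i] == 0 and weights[i] < 0:              # only add benign-leaning features
        x[i] = 1
print(score(x), x)  # score pushed below 0 after adding two features
```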

Nasr, Milad, Shokri, Reza, Houmansadr, Amir.  2019.  Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-Box Inference Attacks against Centralized and Federated Learning. 2019 IEEE Symposium on Security and Privacy (SP). :739–753.

Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
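The white-box signal these attacks exploit can be illustrated with per-sample gradient magnitudes: under SGD-trained models, training members tend to yield smaller parameter gradients. The PyTorch sketch below uses a toy linear model, and the decision threshold is an assumption an attacker would calibrate on shadow data.

```python
# Illustration of the white-box signal exploited by such attacks: per-sample
# gradient norms of the loss w.r.t. model parameters, which tend to be smaller
# for training members of an SGD-trained model. Toy model; the threshold is an
# assumption the attacker would calibrate on shadow data.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)  # stand-in for the target network
loss_fn = nn.CrossEntropyLoss()

def grad_norm(x: torch.Tensor, y: torch.Tensor) -> float:
    """L2 norm of the parameter gradient for a single sample."""
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    return torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters())).item()

x, y = torch.randn(10), torch.tensor(0)
is_member = grad_norm(x, y) < 0.5  # attacker-calibrated threshold (assumed)
print(is_member)
```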

Chen, Jiefeng, Wu, Xi, Rastogi, Vaibhav, Liang, Yingyu, Jha, Somesh.  2019.  Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :480–495.
Wide adoption of artificial neural networks in various domains has led to an increasing interest in defending adversarial attacks against them. Preprocessing defense methods such as pixel discretization are particularly attractive in practice due to their simplicity, low computational overhead, and applicability to various systems. It is observed that such methods work well on simple datasets like MNIST, but break on more complicated ones like ImageNet under recently proposed strong white-box attacks. To understand the conditions for success and potentials for improvement, we study the pixel discretization defense method, including more sophisticated variants that take into account the properties of the dataset being discretized. Our results again show poor resistance against the strong attacks. We analyze our results in a theoretical framework and offer strong evidence that pixel discretization is unlikely to work on all but the simplest of the datasets. Furthermore, our arguments present insights why some other preprocessing defenses may be insecure.
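The defense itself is nearly a one-liner: snap each pixel to the nearest of a few codewords before classification. The sketch below uses uniformly spaced levels; the paper also studies data-aware codeword choices.

```python
# The pixel discretization defense in miniature: snap each pixel to the
# nearest of a few codewords before classification. Uniform levels shown;
# the paper also studies data-aware codeword choices.
import numpy as np

def discretize(image: np.ndarray, levels: int = 2) -> np.ndarray:
    """Quantize pixels in [0, 1] to `levels` evenly spaced values."""
    codewords = np.linspace(0.0, 1.0, levels)
    nearest = np.argmin(np.abs(image[..., None] - codewords), axis=-1)
    return codewords[nearest]

image = np.array([[0.1, 0.45], [0.6, 0.95]])
print(discretize(image, levels=2))
# [[0. 0.] [1. 1.]]  -- binarization, the variant that works well on MNIST
```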
2020-02-17
Murudkar, Chetana V., Gitlin, Richard D..  2019.  QoE-Driven Anomaly Detection in Self-Organizing Mobile Networks Using Machine Learning. 2019 Wireless Telecommunications Symposium (WTS). :1–5.
Current procedures for anomaly detection in self-organizing mobile communication networks use network-centric approaches to identify dysfunctional serving nodes. In this paper, a user-centric approach and a novel methodology for anomaly detection are proposed, where the Quality of Experience (QoE) metric is used to evaluate the end-user experience. The system model demonstrates how dysfunctional serving eNodeBs are successfully detected by implementing a parametric QoE model that uses machine learning to predict user QoE in a network scenario created by the ns-3 network simulator. This approach can play a vital role in future ultra-dense and green mobile communication networks that are expected to be both self-organizing and self-healing.
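A sketch of the user-centric idea: learn to predict per-user QoE from radio measurements, then flag serving nodes whose mean predicted QoE falls below a threshold. The features, data-generating process, and threshold below are invented for illustration.

```python
# Sketch of the user-centric idea: predict per-user QoE from measurements,
# then flag serving nodes whose mean predicted QoE is below a threshold.
# Features, data-generating process, and threshold are invented here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Stand-in per-session features: [SINR_dB, throughput_Mbps, serving_node_id]
X = rng.normal(loc=[10, 20, 0], scale=[5, 8, 0], size=(500, 3))
X[:, 2] = rng.integers(0, 5, 500)  # 5 serving eNodeBs
X[X[:, 2] == 3, :2] -= 8           # node 3 is dysfunctional in this toy data
qoe = 1 + 4 / (1 + np.exp(-(X[:, 0] + X[:, 1] - 25) / 10))  # toy MOS in [1, 5]

model = RandomForestRegressor(random_state=0).fit(X[:, :2], qoe)
for node in range(5):
    pred = model.predict(X[X[:, 2] == node][:, :2]).mean()
    flag = " <- anomalous" if pred < 3.0 else ""
    print(f"eNodeB {node}: mean predicted QoE {pred:.2f}{flag}")
```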
Chowdhury, Mohammad Jabed Morshed, Colman, Alan, Kabir, Muhammad Ashad, Han, Jun, Sarda, Paul.  2019.  Continuous Authorization in Subject-Driven Data Sharing Using Wearable Devices. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :327–333.
Sharing personal data with other people or organizations over the web has become a common phenomenon of our modern life. This type of sharing is usually managed by access control mechanisms that include an access control model and policies. However, these models are designed from the organizational perspective and do not provide sufficient flexibility and control to individuals. Therefore, individuals often cannot control the sharing of their personal data based on their personal context. In addition, existing context-aware access control models usually check the contextual condition once at the beginning of the access and do not evaluate the context during an ongoing access. Moreover, individuals have no control over how often the context condition is evaluated for an ongoing access. Wearable devices such as the Fitbit and Apple Smart Watch have recently become increasingly popular, making it possible to gather an individual's real-time contextual information (e.g., location, blood pressure), which can be used to enforce continuous authorization to the individual's data resources. In this paper, we introduce a novel data sharing policy model for continuous authorization in subject-driven data sharing. A software prototype has been implemented employing a wearable device to demonstrate continuous authorization. Our continuous authorization framework provides more control to individuals by enabling revocation of on-going access to shared data if the specified context condition becomes invalid.
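The core of continuous authorization is a re-evaluation loop: check the subject's context condition at a subject-chosen interval and revoke the ongoing access when it becomes invalid. The policy, context source, and interval below are invented for the sketch.

```python
# Core loop of continuous authorization: re-evaluate the subject's context
# condition at a subject-chosen interval and revoke the ongoing access when
# it becomes invalid. Policy, context source, and interval are invented here.
import time

policy = {
    "resource": "heart_rate_history",
    "condition": lambda ctx: ctx["location"] == "hospital",  # subject-defined
    "recheck_seconds": 1,                                    # subject-chosen
}

def read_wearable_context(step: int) -> dict:
    """Stand-in for a wearable/context feed; the subject leaves after 2 steps."""
    return {"location": "hospital" if step < 2 else "home"}

def monitor_access(policy, max_steps=5):
    for step in range(max_steps):
        ctx = read_wearable_context(step)
        if not policy["condition"](ctx):
            print(f"step {step}: context invalid -> access to "
                  f"{policy['resource']} revoked")
            return
        print(f"step {step}: context valid, access continues")
        time.sleep(policy["recheck_seconds"])

monitor_access(policy)
```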
Ying, Huan, Ouyang, Xuan, Miao, Siwei, Cheng, Yushi.  2019.  Power Message Generation in Smart Grid via Generative Adversarial Network. 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :790–793.
As the next generation of the power system, the smart grid is developing towards automation and intelligence. Along with the benefits brought by smart grids, e.g., improved energy conversion rate, power utilization rate, and power supply quality, come security challenges. One of the most important issues in smart grids is to ensure reliable communication among the secondary equipment. The state-of-the-art method to ensure smart grid security is to detect cyber attacks by deep learning. However, due to the small number of negative samples, the performance of the detection system is limited. In this paper, we propose a novel approach that utilizes the Generative Adversarial Network (GAN) to generate abundant negative samples, which helps to improve the performance of the state-of-the-art detection system. The evaluation results demonstrate that the proposed method can effectively improve the performance of the detection system by 4%.
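A minimal GAN training loop over fixed-length feature vectors (standing in for encoded power messages) is sketched below; the architectures, sizes, and "real" data are illustrative, not the paper's setup.

```python
# Minimal GAN loop over fixed-length feature vectors standing in for encoded
# power messages; architectures, sizes, and the "real" data are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
MSG_DIM, NOISE_DIM = 16, 8

G = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, MSG_DIM))
D = nn.Sequential(nn.Linear(MSG_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_msgs = torch.randn(256, MSG_DIM) * 0.1 + 1.0  # stand-in "real" samples

for step in range(200):
    real = real_msgs[torch.randint(0, 256, (64,))]
    fake = G(torch.randn(64, NOISE_DIM))

    # Discriminator: push real towards label 1, generated towards label 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into labeling fakes as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(5, NOISE_DIM))  # extra negative samples for a detector
print(synthetic.shape)                    # torch.Size([5, 16])
```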