Biblio

Found 126 results

Filters: Keyword is Data models
Fahrbach, M., Miller, G. L., Peng, R., Sawlani, S., Wang, J., Xu, S. C..  2018.  Graph Sketching against Adaptive Adversaries Applied to the Minimum Degree Algorithm. 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS). :101–112.
Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in polylogarithmic time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating interleaved queries and updates to this randomized data structure may be of independent interest.
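The greedy minimum degree heuristic that the paper accelerates can be sketched without any of its sketching/sampling machinery. The naive version below recomputes fill edges explicitly on each elimination, which is exactly the cost the paper's data structure avoids; it is a simplified illustration, not the authors' algorithm.

```python
def min_degree_ordering(adj):
    """Greedy minimum-degree elimination ordering.

    adj: dict mapping vertex -> set of neighbours (undirected graph).
    Eliminating a vertex connects its remaining neighbours into a
    clique (the "fill" edges), which is why fill degrees change as
    elimination proceeds.
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # exact min fill degree
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}                 # add fill edges
        order.append(v)
    return order
```

On a star graph, for example, the leaves are eliminated before the hub, since the hub always has the larger fill degree.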
Versluis, L., Neacsu, M., Iosup, A..  2018.  A Trace-Based Performance Study of Autoscaling Workloads of Workflows in Datacenters. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :223–232.
To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies investigate the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong, quantitative evidence of previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains and that allocation and provisioning policies should be co-designed.
Serey, J., Ternero, R., Soto, I., Quezada, L..  2017.  A Competency Model to Help Selecting the Information Security Method for Platforms of Communication by Visible Light (VLC). 2017 First South American Colloquium on Visible Light Communications (SACVLC). :1–6.
Managing information security for Platforms of Communication by Visible Light (VLC) is challenging, and solutions must be matched to the right security problems. Several methods have been developed and evolve constantly to meet complex and ever-changing business needs around the world. In the business context, the people responsible for a project or an organization are under professional and emotional stress. This research project develops a new model that helps decision makers evaluate these alternative methods by articulating different types of security problems, formulating security criteria, and simulating the expected outcome of adopting the chosen method for Platforms of Communication by Visible Light (VLC).
Teoh, T. T., Nguwi, Y. Y., Elovici, Y., Cheung, N. M., Ng, W. L..  2017.  Analyst Intuition Based Hidden Markov Model on High Speed, Temporal Cyber Security Big Data. 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). :2080–2083.
Hidden Markov Models (HMMs) are probabilistic models that can be used for forecasting time-series data. They have seen success in domains such as finance [1-5], bioinformatics [6-8], healthcare [9-11], agriculture [12-14], and artificial intelligence [15-17]. However, applications of HMMs in cyber security to date remain few. We believe that the properties of HMMs, being predictive and probabilistic, and able to model different naturally occurring states, form a good basis for modeling cyber security data. This motivates the present work, which provides initial results from our attempts to predict security attacks using HMMs. A large network dataset representing cyber security attacks is used in this work to establish an expert system. The characteristics of attackers' IP addresses are extracted from our integrated datasets to generate statistical data. A cyber security expert provides the weight of each attribute and forms a scoring system by annotating the log history. We apply an HMM to distinguish between attack, unsure, and no-attack states by first breaking the data into three clusters using Fuzzy K-Means (FKM), then manually labeling a small subset (analyst intuition), and finally applying an HMM state-based approach. The results are very encouraging compared with plain anomaly detection on a cyber security log, which generally produces a huge number of false detections.
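The HMM state-based labeling step can be sketched with a standard log-space Viterbi decoder. The three states and the transition/emission probabilities below are illustrative assumptions, not the paper's fitted parameters; the observations stand in for analyst attribute scores binned into low/mid/high.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a discrete HMM (log-space)."""
    V = [{s: (math.log(start_p[s] * emit_p[s][obs[0]]), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] + math.log(trans_p[p][s] * emit_p[s][obs[t]]), p)
                for p in states)
            V[t][s] = (prob, prev)
    path = [max(V[-1], key=lambda s: V[-1][s][0])]   # best final state
    for t in range(len(obs) - 1, 0, -1):             # backtrack
        path.append(V[t][path[-1]][1])
    return path[::-1]

# Illustrative (made-up) parameters: sticky states, score-driven emissions.
STATES = ["no_attack", "unsure", "attack"]
START = {"no_attack": 0.6, "unsure": 0.3, "attack": 0.1}
TRANS = {s: {t: 0.8 if s == t else 0.1 for t in STATES} for s in STATES}
EMIT = {"no_attack": {"low": 0.7, "mid": 0.2, "high": 0.1},
        "unsure":    {"low": 0.3, "mid": 0.4, "high": 0.3},
        "attack":    {"low": 0.1, "mid": 0.2, "high": 0.7}}
```

A run of low scores followed by high scores decodes to a no-attack segment followed by an attack segment, rather than flagging each high observation in isolation.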
Adams, S., Carter, B., Fleming, C., Beling, P. A..  2018.  Selecting System Specific Cybersecurity Attack Patterns Using Topic Modeling. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :490–497.
One challenge for cybersecurity experts is deciding which type of attack would be successful against the system they wish to protect. Often, this challenge is addressed in an ad hoc fashion and is highly dependent upon the skill and knowledge base of the expert. In this study, we present a method for automatically ranking attack patterns in the Common Attack Pattern Enumeration and Classification (CAPEC) database for a given system. This ranking method is intended to produce suggested attacks to be evaluated by a cybersecurity expert and not a definitive ranking of the "best" attacks. The proposed method uses topic modeling to extract hidden topics from the textual description of each attack pattern and learn the parameters of a topic model. The posterior distribution of topics for the system is estimated using the model and any provided text. Attack patterns are ranked by measuring the distance between each attack topic distribution and the topic distribution of the system using KL divergence.
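The ranking step described above can be sketched directly: given topic distributions (the three-topic vectors and attack names below are hypothetical, not CAPEC data), attack patterns are ordered by KL divergence from the system's topic distribution.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete topic distributions of equal length."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def rank_attacks(system_topics, attack_topics):
    """Order attack patterns by closeness of their topic distribution
    to the system's distribution (smaller KL divergence ranks higher)."""
    return sorted(attack_topics,
                  key=lambda name: kl_divergence(system_topics,
                                                 attack_topics[name]))
```

An attack pattern whose topic mix resembles the system description is suggested first; the output is a ranked shortlist for the expert, not a verdict.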
Dang, T. D., Hoang, D..  2017.  A data protection model for fog computing. 2017 Second International Conference on Fog and Mobile Edge Computing (FMEC). :32–38.

Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, the cloud does not handle well local issues involving a large number of networked elements (IoT devices) and is not responsive enough for many applications that require the immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and is also well positioned to address these local and performance issues because its resources and specific services are virtualized and located at the edge, on the customer premises. However, data security is a critical challenge in fog computing, especially when fog nodes and their data move frequently within their environment. This paper addresses the data protection and performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes across regions, 2) introducing Fog-based Privacy-aware Role-Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes in user and fog device locations. The implementation results demonstrate the feasibility and efficiency of the proposed framework.

Chen, X., Shang, T., Kim, I., Liu, J..  2017.  A Remote Data Integrity Checking Scheme for Big Data Storage. 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC). :53–59.
In the existing remote data integrity checking schemes, dynamic update operates on block level, which usually restricts the location of the data inserted in a file due to the fixed size of a data block. In this paper, we propose a remote data integrity checking scheme with fine-grained update for big data storage. The proposed scheme achieves basic operations of insertion, modification, deletion on line level at any location in a file by designing a mapping relationship between line level update and block level update. Scheme analysis shows that the proposed scheme supports public verification and privacy preservation. Meanwhile, it performs data integrity checking with low computation and communication cost.
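A minimal sketch of one way such a line-to-block mapping could work, assuming fixed-size blocks (the paper's actual mapping design may differ): updating a range of lines identifies exactly which blocks' authentication tags must be recomputed, so an insertion at an arbitrary line no longer forces whole-file re-tagging.

```python
BLOCK_SIZE = 4  # lines per block; an illustrative choice

def line_to_block(line_no):
    """Map a zero-based line number to the block that holds it."""
    return line_no // BLOCK_SIZE

def blocks_touched(start_line, num_lines):
    """Blocks whose tags need recomputing after updating a line range."""
    first = line_to_block(start_line)
    last = line_to_block(start_line + num_lines - 1)
    return list(range(first, last + 1))
```

Modifying a single line touches one block, while an update spanning a block boundary touches exactly the two adjacent blocks.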
Li, W., Song, T., Li, Y., Ma, L., Yu, J., Cheng, X..  2017.  A Hierarchical Game Framework for Data Privacy Preservation in Context-Aware IoT Applications. 2017 IEEE Symposium on Privacy-Aware Computing (PAC). :176–177.

Due to the increasing concerns of securing private information, context-aware Internet of Things (IoT) applications are in dire need of supporting data privacy preservation for users. In the past years, game theory has been widely applied to design secure and privacy-preserving protocols for users to counter various attacks, and most of the existing work is based on a two-player game model, i.e., a user/defender-attacker game. In this paper, we consider a more practical scenario which involves three players: a user, an attacker, and a service provider, and such a complicated system renders any two-player model inapplicable. To capture the complex interactions between the service provider, the user, and the attacker, we propose a hierarchical two-layer three-player game framework. Finally, we carry out a comprehensive numerical study to validate our proposed game framework and theoretical analysis.

Jinan, S., Kefeng, P., Xuefeng, C., Junfu, Z..  2017.  Security Patterns from Intelligent Data: A Map of Software Vulnerability Analysis. 2017 IEEE 3rd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). :18–25.

A significant milestone is reached when the field of software vulnerability research matures to the point of warranting related security patterns represented by intelligent data. A substantial body of empirical findings, a distinctive taxonomy, theoretical models, and a set of novel or adapted detection methods justify a unifying research map. The growing interest in software vulnerabilities is evident from the large number of works published over the last several decades. This article briefly reviews research in vulnerability enumeration, taxonomy, models, and detection methods from the perspective of intelligent data processing and analysis. It also draws a map of the specific characteristics and challenges of vulnerability research, such as vulnerability pattern representation and problem-solving strategies.

Zhe, D., Qinghong, W., Naizheng, S., Yuhan, Z..  2017.  Study on Data Security Policy Based on Cloud Storage. 2017 IEEE 3rd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). :145–149.

With the growing popularity of cloud computing, cloud storage technology has received increasing attention as an emerging network storage technology extended and developed from cloud computing concepts. The cloud computing environment depends on user services, such as high-speed storage and retrieval, provided by the cloud computing system. Meanwhile, data security is an urgent problem for cloud storage technology. In recent years, malicious attacks on cloud storage systems have multiplied, and data leaks from cloud storage systems have occurred frequently; cloud storage security directly concerns the security of users' data. The purpose of this paper is to achieve data security in cloud storage and to formulate corresponding cloud storage security policies. These policies are combined with the results of existing academic research by analyzing the security risks to user data in cloud storage and examining the relevant security technologies based on the structural characteristics of cloud storage systems.

Lee, J., Kim, Y. S., Kim, J. H., Kim, I. K..  2017.  Toward the SIEM architecture for cloud-based security services. 2017 IEEE Conference on Communications and Network Security (CNS). :398–399.

Cloud computing represents one of the most significant shifts in information technology, and it enables the provision of cloud-based security services such as Security-as-a-Service (SECaaS). As cloud computing technologies improve, the traditional SIEM paradigm can shift to cloud-based security services. In this paper, we propose a SIEM architecture that can be deployed on the SECaaS platform we have been developing for analyzing and recognizing intelligent cyber-threats based on virtualization technologies.

Yue, L., Junqin, H., Shengzhi, Q., Ruijin, W..  2017.  Big Data Model of Security Sharing Based on Blockchain. 2017 3rd International Conference on Big Data Computing and Communications (BIGCOM). :117–121.

The rise of the big data age on the Internet has led to explosive growth in data size. However, trust has become the biggest problem of big data, hindering the safe circulation of data and the development of the industry. Blockchain technology provides a new solution to this problem by combining tamper-resistance and traceability with smart contracts that automatically execute default instructions. In this paper, we present a credible big data sharing model based on blockchain technology and smart contracts to ensure the safe circulation of data resources.

Uwagbole, S. O., Buchanan, W. J., Fan, L..  2017.  An applied pattern-driven corpus to predictive analytics in mitigating SQL injection attack. 2017 Seventh International Conference on Emerging Security Technologies (EST). :12–17.

Emerging computing relies heavily on secure back-end storage for the massive volumes of big data originating from Internet of Things (IoT) smart devices and Cloud-hosted web applications. Structured Query Language (SQL) Injection Attack (SQLIA) remains an intruder's exploit of choice for pilfering confidential data from back-end databases, with damaging ramifications. Existing approaches all predate the new era of Internet big data mining and thus lack the ability to cope with new signatures concealed in a large volume of web requests over time. These approaches are also string-lookup techniques aimed at the on-premise application domain boundary, and are not applicable to roaming Cloud-hosted services' edge Software-Defined Network (SDN) to application endpoints with large web request volumes. A Machine Learning (ML) approach provides scalable big data mining for SQLIA detection and prevention. Unfortunately, the absence of a corpus for training a classifier is a well-known issue in SQLIA research applying Artificial Intelligence (AI) techniques. This paper presents an application-context, pattern-driven corpus for training a supervised learning model. The model is trained with ML algorithms of Two-Class Support Vector Machine (TC SVM) and Two-Class Logistic Regression (TC LR) implemented in Microsoft Azure Machine Learning (MAML) Studio to mitigate SQLIA. The presented scheme is then empirically evaluated using Receiver Operating Characteristic (ROC) curves.

Aygun, R. C., Yavuz, A. G..  2017.  Network Anomaly Detection with Stochastically Improved Autoencoder Based Models. 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing (CSCloud). :193–198.

Intrusion detection systems do not perform well at detecting zero-day attacks, so improving their performance in that regard is an active research topic. In this study, to detect zero-day attacks with high accuracy, we propose two deep-learning-based anomaly detection models using an autoencoder and a denoising autoencoder, respectively. The key factor that directly affects the accuracy of the proposed models is the threshold value, which we determined using a stochastic approach rather than the approaches available in the current literature. The proposed models were tested on the KDDTest+ dataset contained in NSL-KDD, achieving accuracies of 88.28% and 88.65%, respectively. The results show that, as singular models, our proposed anomaly detection models outperform other singular anomaly detection methods and perform almost as well as the newly suggested hybrid anomaly detection models.
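The paper determines its threshold stochastically; as a much-simplified statistical stand-in (not the authors' procedure), a threshold can be set from the distribution of the autoencoder's reconstruction errors on benign validation data, e.g. mean plus k standard deviations, and inputs reconstructing worse than that are flagged anomalous.

```python
import statistics

def pick_threshold(errors, k=2.0):
    """Anomaly threshold from benign reconstruction errors:
    mean + k * population standard deviation."""
    return statistics.mean(errors) + k * statistics.pstdev(errors)

def is_anomaly(error, threshold):
    """An input is anomalous if its reconstruction error exceeds
    the threshold learned on benign traffic."""
    return error > threshold
```

Raising k trades false alarms for missed detections, which is exactly the accuracy lever the abstract attributes to threshold choice.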

Ahmadon, M. A. B., Yamaguchi, S., Saon, S., Mahamad, A. K..  2017.  On service security analysis for event log of IoT system based on data Petri net. 2017 IEEE International Symposium on Consumer Electronics (ISCE). :4–8.

The Internet of Things (IoT) has bridged our physical world to the cyber world, allowing us to achieve our desired lifestyles. However, service security is essential to ensure that a designed service is not compromised. In this paper, we propose a security analysis for IoT services, focusing on detecting malicious operations in the event log of a designed IoT service. We utilize Petri nets with data to model an IoT service that is logically correct, then check traces from an event log by tracking the captured process and data. Finally, we illustrate the approach with a smart home service and show its effectiveness.

Matt, J., Waibel, P., Schulte, S..  2017.  Cost- and Latency-Efficient Redundant Data Storage in the Cloud. 2017 IEEE 10th Conference on Service-Oriented Computing and Applications (SOCA). :164–172.
With the steady increase of offered cloud storage services, they became a popular alternative to local storage systems. Beside several benefits, the usage of cloud storage services can offer, they have also some downsides like potential vendor lock-in or unavailability. Different pricing models, storage technologies and changing storage requirements are further complicating the selection of the best fitting storage solution. In this work, we present a heuristic optimization approach that optimizes the placement of data on cloud-based storage services in a redundant, cost- and latency-efficient way while considering user-defined Quality of Service requirements. The presented approach uses monitored data access patterns to find the best fitting storage solution. Through extensive evaluations, we show that our approach saves up to 30% of the storage cost and reduces the upload and download times by up to 48% and 69% in comparison to a baseline that follows a state-of-the-art approach.
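The paper uses a heuristic optimizer over monitored access patterns; a much-simplified greedy stand-in illustrates the core trade-off of choosing r redundant stores by a weighted cost/latency score. The field names, weights, and provider records below are assumptions for illustration, not the paper's model.

```python
def place_redundant(providers, r, w_cost=0.5, w_latency=0.5):
    """Pick r storage services minimizing a weighted cost/latency score.

    providers: list of dicts with (assumed) keys "name", "cost", "latency",
    both already normalized to comparable scales.
    """
    score = lambda p: w_cost * p["cost"] + w_latency * p["latency"]
    return sorted(providers, key=score)[:r]
```

Shifting the weights toward latency models a QoS requirement that favors nearby, faster services even when they cost more; the real optimizer additionally reacts to changing access patterns over time.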
An, S., Zhao, Z., Zhou, H..  2017.  Research on an Agent-Based Intelligent Social Tagging Recommendation System. 2017 9th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). 1:43–46.

With the rapid growth of social tagging users, it becomes very important for social tagging systems to recommend required resources to users rapidly and accurately. First, the architecture of an agent-based intelligent social tagging system is constructed using agent technology. Second, the design and implementation of user interest mining, personalized recommendation, and common preference group recommendation are presented. Finally, a self-adaptive recommendation strategy for social tagging, together with its implementation, is proposed based on an analysis of the shortcomings of the personalized recommendation strategy and the common preference group recommendation strategy. The self-adaptive strategy achieves an equilibrium between efficiency and accuracy, resolving the contradiction between the two in the personalized recommendation model and the common preference recommendation model.

Price-Williams, M., Heard, N., Turcotte, M..  2017.  Detecting Periodic Subsequences in Cyber Security Data. 2017 European Intelligence and Security Informatics Conference (EISIC). :84–90.

Anomaly detection for cyber-security defence has garnered much attention in recent years, providing an orthogonal approach to traditional signature-based detection systems. Anomaly detection relies on building probability models of normal computer network behaviour and detecting deviations from the model. Most data sets used for cyber-security have a mix of user-driven events and automated network events, the latter most often appearing as polling behaviour. Separating these automated events from those caused by human activity is essential to building good statistical models for anomaly detection. This article presents a changepoint detection framework for identifying automated network events appearing as periodic subsequences of event times. The opening event of each subsequence is interpreted as a human action which then generates an automated, periodic process. Difficulties arising from the presence of duplicate and missing data are addressed. The methodology is demonstrated using authentication data from Los Alamos National Laboratory's enterprise computer network.
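One simple proxy for the periodic-subsequence idea (a hedged illustration, not the paper's changepoint framework): automated polling produces near-constant inter-arrival gaps, so a small coefficient of variation of the gaps flags a subsequence of event times as machine-generated.

```python
import statistics

def looks_automated(times, cv_threshold=0.1):
    """Flag an event-time subsequence as automated polling when the
    coefficient of variation of its inter-arrival gaps is small."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    if len(gaps) < 2:
        return False          # too short to judge
    mean = statistics.mean(gaps)
    if mean == 0:
        return True           # duplicate timestamps: trivially periodic
    cv = statistics.pstdev(gaps) / mean
    return cv < cv_threshold
```

Human-driven sessions show bursty, irregular gaps and fail the test, while a fixed-interval poller passes it; the paper's framework additionally locates the changepoint where the human-initiated subsequence begins.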

Joshaghani, R., Mehrpouyan, H..  2017.  A Model-Checking Approach for Enforcing Purpose-Based Privacy Policies. 2017 IEEE Symposium on Privacy-Aware Computing (PAC). :178–179.

With the growth of Internet in many different aspects of life, users are required to share private information more than ever. Hence, users need a privacy management tool that can enforce complex and customized privacy policies. In this paper, we propose a privacy management system that not only allows users to define complex privacy policies for data sharing actions, but also monitors users' behavior and relationships to generate realistic policies. In addition, the proposed system utilizes formal modeling and model-checking approach to prove that information disclosures are valid and privacy policies are consistent with one another.

Wu, T. Y., Tseng, Y. M., Huang, S. S., Lai, Y. C..  2017.  Non-Repudiable Provable Data Possession Scheme With Designated Verifier in Cloud Storage Systems. IEEE Access. 5:19333–19341.

In cloud storage systems, users can upload their data along with associated tags (authentication information) to cloud storage servers. To ensure the availability and integrity of the outsourced data, provable data possession (PDP) schemes convince verifiers (users or third parties) that the outsourced data stored in the cloud storage server is correct and unchanged. Recently, several PDP schemes with designated verifier (DV-PDP) were proposed to provide the flexibility of an arbitrary designated verifier. A designated verifier (private verifier) is trustable and designated by a user to check the integrity of the outsourced data. However, these DV-PDP schemes are either inefficient or insecure under some circumstances. In this paper, we propose the first non-repudiable PDP scheme with designated verifier (DV-NRPDP) to address the non-repudiation issue and resolve possible disputes between users and cloud storage servers. We define the system model, framework, and adversary model of DV-NRPDP schemes. Afterward, a concrete DV-NRPDP scheme is presented. Based on the discrete logarithm assumption, we formally prove that the proposed DV-NRPDP scheme is secure against several forgery attacks in the random oracle model. Comparisons with previously proposed schemes demonstrate the advantages of our scheme.

Dering, M. L., Tucker, C. S..  2017.  Generative Adversarial Networks for Increasing the Veracity of Big Data. 2017 IEEE International Conference on Big Data (Big Data). :2595–2602.

This work describes how automated data generation integrates into a big data pipeline. A lack of veracity in big data can produce models that are inaccurate or biased by trends in the training data, leading to issues that are difficult to overcome as a pipeline matures. This work describes the use of a Generative Adversarial Network to generate sketch data, such as might be used in a human verification task. The generated sketches were verified as recognizable using a crowd-sourcing methodology and were correctly recognized 43.8% of the time, in contrast to human-drawn sketches, which were recognized 87.7% of the time. The method is scalable, can generate realistic data in many domains, and can bootstrap a dataset for training a model prior to deployment.

Gebhardt, D., Parikh, K., Dzieciuch, I., Walton, M., Hoang, N. A. V..  2017.  Hunting for Naval Mines with Deep Neural Networks. OCEANS 2017 - Anchorage. :1–5.

Explosive naval mines pose a threat to ocean- and sea-faring vessels, both military and civilian. This work applies deep neural network (DNN) methods to the problem of detecting mine-like objects (MLOs) on the seafloor in side-scan sonar imagery. We explored how DNN depth, memory requirements, computation requirements, and training data distribution affect detection efficacy. A visualization technique (class activation map) was incorporated to help a user interpret the model's behavior. We found that modest DNN model sizes yielded better accuracy (98%) than very simple DNN models (93%) and a support vector machine (78%). The largest DNN models achieved a <1% efficacy increase at the cost of a 17x increase in trainable parameter count and computation requirements. In contrast to DNNs popularized for many-class image recognition tasks, the models for this task require far fewer computational resources (0.3% of the parameters) and are suitable for embedded use within an autonomous unmanned underwater vehicle.

Hill, Z., Nichols, W. M., Papa, M., Hale, J. C., Hawrylak, P. J..  2017.  Verifying Attack Graphs through Simulation. 2017 Resilience Week (RWS). :64–67.

Verifying attacks against cyber physical systems can be a costly and time-consuming process. By using a simulated environment, attacks can be verified quickly and accurately. By combining the simulation of a cyber physical system with a hybrid attack graph, the effects of a series of exploits can be accurately analysed. Furthermore, the use of a simulated environment to verify attacks may uncover new information about the nature of the attacks.

Chen, Y., Chen, W..  2017.  Finger ECG-Based Authentication for Healthcare Data Security Using Artificial Neural Network. 2017 IEEE 19th International Conference on E-Health Networking, Applications and Services (Healthcom). :1–6.

Wearable and mobile medical devices provide efficient, comfortable, and economical health monitoring, with a wide range of applications from daily to clinical scenarios. Health data security has become a critically important issue. The electrocardiogram (ECG) has proven to be a potential biometric for human recognition over the past decade. Unlike conventional authentication methods using passwords, fingerprints, face, etc., an ECG signal cannot simply be intercepted or duplicated, and it enables continuous identification. However, the algorithms developed in many studies are not suitable for practical application because they usually require long ECG recordings for authentication. In this work, we introduce a two-phase authentication scheme using artificial neural network (NN) models. The algorithm enables fast authentication within only 3 seconds while achieving reasonable recognition performance. We test the proposed method in a controlled laboratory experiment with 50 subjects. Finger ECG signals are collected using a mobile device at different times and in different physical states. In the first stage, a ``General'' NN model constructed from the cohort's data is used for preliminary screening, while in the second stage ``Personal'' NN models constructed from each individual's data are applied for fine-grained identification. The algorithm is tested on the whole data set and on subsets of different sizes (5, 10, 20, 30, and 40 subjects). The results show that the proposed method is feasible and reliable for individual authentication, obtaining an average False Acceptance Rate (FAR) and False Rejection Rate (FRR) below 10% on the whole data set.

Pandey, M., Pandey, R., Chopra, U. K..  2017.  Rendering Trustability to Semantic Web Applications-Manchester Approach. 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS). :255–259.

The Semantic Web today is a web that allows for intelligent knowledge retrieval by means of semantically annotated tags. This web, also known as the intelligent web, aims to provide meaningful information to humans and machines equally. However, the information thus provided lacks the component of trust. We therefore propose a method to embed trust in semantic web documents through the concept of provenance, which answers who, when, where, and by whom the documents were created or modified. This paper demonstrates this using the Manchester approach to provenance, implemented in a university ontology.