Biblio
The start-up value of an SRAM cell is unique, random, and unclonable, as it is determined by the inherent process mismatch between transistors. These properties make SRAM an attractive circuit for generating encryption keys. The primary challenge for SRAM-based key generation, however, is poor stability when the circuit is subject to random noise, temperature and voltage changes, and device aging. Temporal majority voting (TMV) and bit masking were used in previous works to identify and store the locations of unstable or marginally stable SRAM cells. However, TMV requires a long test time and significant hardware resources, and the number of repetitive power-ups required to find the most stable cells is prohibitively high. To overcome the shortcomings of TMV, we propose a novel data remanence-based technique to detect the SRAM cells with the highest stability for reliable key generation. This approach requires only two remanence tests: writing `1' (or `0') to the entire array and momentarily shutting down the power until a few cells flip. We exploit the fact that the cells that are easily flipped are the most robust cells when written with the opposite data. The proposed method is more effective at finding the most stable cells in a large SRAM array than a TMV scheme with 1,000 power-up tests. Experimental studies show that the 256-bit key generated from a 512 kbit SRAM using the proposed data remanence method is 100% stable under different temperatures, power ramp-up times, and device aging.
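A minimal sketch of the remanence-based selection idea described above, assuming a simulated SRAM array (the skew model, outage threshold, and array size are hypothetical, not the authors' silicon measurements): cells that flip quickly after being written with one value are taken as the most stable cells for the opposite value.

```python
import random

# Hypothetical simulation: each cell has a "skew" toward its preferred
# power-up value; strongly skewed cells flip fastest in a remanence test
# when written with the opposite data.
N = 1024
skew = [random.gauss(0.0, 1.0) for _ in range(N)]   # >0 prefers '1', <0 prefers '0'

def remanence_test(written_value, outage_strength):
    """Return indices of cells that flip after a brief power outage."""
    flipped = []
    for i, s in enumerate(skew):
        prefers = 1 if s > 0 else 0
        # A cell written with data opposite to its preference flips once the
        # outage is strong enough to overcome its small retention margin.
        if prefers != written_value and abs(s) > outage_strength:
            flipped.append(i)
    return flipped

# Two tests: write all '1', then all '0', with a short outage each time.
flip_after_ones  = remanence_test(written_value=1, outage_strength=2.0)
flip_after_zeros = remanence_test(written_value=0, outage_strength=2.0)

# Cells that flipped easily are the most robust when holding the opposite data.
stable_zero_cells = flip_after_ones     # these strongly prefer '0'
stable_one_cells  = flip_after_zeros    # these strongly prefer '1'
key_cells = (stable_one_cells + stable_zero_cells)[:256]
print(len(stable_one_cells), len(stable_zero_cells), len(key_cells))
```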
Cloud computing is a new computing paradigm that is attracting growing attention from computer users, government agencies, and businesses. It brings many advantages, particularly ever-present services that anyone can access over the Internet. With cloud computing, a company no longer needs its own physical servers, hardware, networks, and Internet services to support its computer systems. One of the core services offered by cloud technology is remote data storage. In recent years, data storage has been recognized as an important problem in information technology. Cloud data storage raises a set of significant policy issues, including privacy, anonymity, security, government surveillance, telecommunication capacity, liability, and reliability, among others. Although cloud technology provides many benefits, security is the key issue between the customer and the cloud. Cloud customers typically include academia, enterprises, and ordinary users, each with different incentives to move to the cloud. For academic clients, security measures affect computing performance, so the cloud provider needs to find a way to combine performance and security; for other clients the more significant issue is security, viewed from a different perspective, and high performance may not be as critical for them as it is for academia. In this paper, we design an efficient, secure, and verifiable protocol for outsourcing data. We develop a protocol based on an extended quadratic programming (QP) problem for storing and outsourcing data securely. To verify the correctness of the outsourced computation, we validate the result returned by the cloud using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for the optimal solution.
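To illustrate the kind of result verification described above, here is a small sketch of checking the KKT conditions for a generic convex QP (minimize 1/2 x^T Q x + c^T x subject to Ax <= b). The problem instance and multipliers are made up for illustration; this is not the authors' extended-QP outsourcing protocol.

```python
import numpy as np

def kkt_holds(Q, c, A, b, x, lam, tol=1e-6):
    """Check the KKT conditions for min 1/2 x^T Q x + c^T x  s.t.  A x <= b,
    given a candidate primal solution x and dual multipliers lam returned
    by an (untrusted) cloud solver."""
    primal_feasible = np.all(A @ x <= b + tol)
    dual_feasible   = np.all(lam >= -tol)
    stationarity    = np.allclose(Q @ x + c + A.T @ lam, 0.0, atol=tol)
    complementary   = np.allclose(lam * (A @ x - b), 0.0, atol=tol)
    return primal_feasible and dual_feasible and stationarity and complementary

# Tiny made-up instance: min 1/2 (x1^2 + x2^2)  s.t.  x1 >= 1, x2 >= 1.
Q = np.eye(2); c = np.zeros(2)
A = -np.eye(2); b = -np.ones(2)
x_cloud   = np.array([1.0, 1.0])   # claimed optimum from the cloud
lam_cloud = np.array([1.0, 1.0])   # claimed multipliers
print(kkt_holds(Q, c, A, b, x_cloud, lam_cloud))  # True for a correct answer
```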
Cloud computing has a major role in the development of commercial systems. It enables companies like Microsoft, Amazon, IBM, and Google to deliver their services on a large scale to their users. A cloud service provider (CSP) manages cloud-computing-based services and applications. For any organization, the CSP is an entity that works within it, so it suffers from the vulnerabilities associated with the organization, including internal and external attacks. It is therefore a challenge for the organization to secure the cloud service provider while maintaining quality of service. Attribute-based encryption can be used to provide data security, either with key-policy attribute-based encryption (KP-ABE) or ciphertext-policy attribute-based encryption (CP-ABE), but these schemes lack scalability and flexibility. A hierarchical CP-ABE scheme is proposed here to provide fine-grained access control. Data security is achieved using encryption, authentication, and authorization mechanisms. Attribute key generation is proposed for implementing user authorization. The proposed system is also protected against SQL injection attacks.
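As a plain, non-cryptographic illustration of the attribute-based access control that CP-ABE enforces, the sketch below evaluates a hypothetical hierarchical AND/OR attribute policy against a user's attribute set. In the actual scheme the policy is bound into the ciphertext with pairing-based cryptography, which is omitted here.

```python
# Hypothetical policy tree: ("AND"/"OR", [children]) or a leaf attribute string.
policy = ("AND", [
    "department:finance",
    ("OR", ["role:manager", "role:auditor"]),
])

def satisfies(policy, attributes):
    """Return True if the user's attribute set satisfies the policy tree."""
    if isinstance(policy, str):                      # leaf: a single attribute
        return policy in attributes
    gate, children = policy
    results = (satisfies(child, attributes) for child in children)
    return all(results) if gate == "AND" else any(results)

print(satisfies(policy, {"department:finance", "role:auditor"}))  # True
print(satisfies(policy, {"department:finance", "role:clerk"}))    # False
```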
The Smart Grid is growing rapidly and ingeniously, while attackers breed and sow damage on a massive scale. This state of affairs makes security a sapling that must be continually irrigated with research and analysis. In this work, cyber security is endowed with resilience against the SYN-flooding-induced denial-of-service attack. The proposed secure web server algorithm, embedded in the LPC1768 processor, ensures that smart resources are protected from the attack.
Mobile cloud computing is a combination of mobile computing and cloud computing that provides a platform for mobile users to offload heavy tasks and data to the cloud, thus helping them overcome the limitations of their mobile devices. However, while using mobile cloud computing, users lose physical control of their data, which ultimately calls for a data security protocol. Although numerous such protocols have been proposed, none of them consider a cloudlet-based architecture. A cloudlet is a reliable, resource-rich computer or cluster that is well connected to the Internet and available to nearby mobile devices. In this paper, we propose a data security protocol for a distributed cloud architecture with a cloudlet integrated with the base station, using the property of perfect forward secrecy. Our protocol not only protects data from any unauthorized user, but also prevents exposure of data to the cloud owner.
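The abstract does not specify the key exchange, but a common way to obtain perfect forward secrecy is an ephemeral Diffie-Hellman exchange in which both sides derive a session key and then discard their ephemeral keys. A minimal sketch with X25519 from the `cryptography` package follows; it illustrates the property only and is not the authors' cloudlet protocol (the `info` label is made up).

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def derive_session_key(own_private, peer_public):
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"cloudlet-session").derive(shared)

# Each session uses fresh ephemeral keys; compromising any long-term key later
# does not expose past session keys (perfect forward secrecy).
mobile_eph   = X25519PrivateKey.generate()
cloudlet_eph = X25519PrivateKey.generate()

k_mobile   = derive_session_key(mobile_eph, cloudlet_eph.public_key())
k_cloudlet = derive_session_key(cloudlet_eph, mobile_eph.public_key())
assert k_mobile == k_cloudlet   # both ends hold the same session key

# Ephemeral private keys are then discarded; only the session key is used.
del mobile_eph, cloudlet_eph
```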
As the scale of big data in large-scale IoT applications grows, fog computing has emerged as a computing paradigm that extends cloud computing towards the edge of the network. In a fog computing system (FCS), a large number of storage resources are placed at the edge of the network to form a geographically distributed storage system. It is used to store the big data collected by the fog computing nodes and to reduce the cost of moving big data to the cloud. However, the storage on fog nodes at the edge of the network is directly exposed to external threats. To improve the security of fog node storage in an FCS, in this paper we propose a data security storage model for fog computing (FCDSSM) that integrates storage and security management in large-scale IoT applications. We design the FCDSSM system architecture in detail, together with its multi-level trusted domains, cooperative working mechanism, data synchronization, and key management strategy. Experimental results show that the loss of computing and communication performance caused by secure data storage in the FCDSSM is within an acceptable range and that the FCDSSM has good scalability, so it can be adapted to secure big data storage in large-scale IoT applications.
The ability to discover patterns of interest in criminal networks can support and ease the investigation tasks of security and law enforcement agencies. By considering criminal networks as a special case of social networks, we can reuse most of the state-of-the-art techniques to discover patterns of interest, i.e., hidden and potential links. Nevertheless, in time-sensitive scenarios, like those involving criminal actions, the ability to discover patterns in (near) real time can be of primary importance. In this paper, we investigate the identification of patterns for link detection and prediction on an evolving criminal network. To extract valuable information as soon as data is generated, we exploit a stream processing approach. To this end, we also propose three new social network similarity metrics, specifically tailored to criminal link detection and prediction. Then, we develop a flexible data stream processing application relying on the Apache Flink framework; this solution allows us to deploy and evaluate the newly proposed metrics as well as those existing in the literature. The experimental results show that the new metrics we propose reach up to 83% accuracy in detection and 82% accuracy in prediction, making them competitive with state-of-the-art metrics.
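The three new metrics are specific to the paper, but as a baseline illustration of similarity-based link detection and prediction, the sketch below scores non-adjacent node pairs with two classical neighborhood metrics (common neighbors and Jaccard) over an adjacency list; the toy graph is made up.

```python
from itertools import combinations

# Toy contact graph as an adjacency list (made-up data).
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "e"},
    "d": {"a"},
    "e": {"c"},
}

def common_neighbors(u, v):
    return len(graph[u] & graph[v])

def jaccard(u, v):
    union = graph[u] | graph[v]
    return len(graph[u] & graph[v]) / len(union) if union else 0.0

# Score every non-adjacent pair; high scores are candidate hidden/future links.
candidates = [(u, v) for u, v in combinations(graph, 2) if v not in graph[u]]
ranking = sorted(candidates, key=lambda pair: jaccard(*pair), reverse=True)
for u, v in ranking:
    print(u, v, common_neighbors(u, v), round(jaccard(u, v), 2))
```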
The Industrial Internet promises to radically change and improve many industries' daily business activities, from simple data collection and processing to context-driven, intelligent, and proactive support of workers' everyday tasks and lives. This paper first provides insight into a typical Industrial Internet application architecture, then highlights one fundamental arising contradiction: “Who owns the data is often not capable of analyzing it”. This statement is explained by imagining a visionary data supply chain that would realize some of the Industrial Internet promises. To concretely implement such a system, recent standards published by The Open Group are presented, and we highlight the characteristics that make them suitable for Industrial Internet applications. Finally, we discuss comparable solutions and conclude with new business use cases.
Data security has become an issue of increasing importance, especially for Web applications and distributed databases. One solution is using cryptographic algorithms, whose improvement has become a constant concern. The increasing complexity of these algorithms entails higher execution times, leading to a decrease in application performance. This paper presents a comparison of execution times for three asymmetric-key algorithms, depending on the size of the encryption/decryption keys: RSA, ElGamal, and ECIES. For this comparison, a benchmark using Java APIs and an application for testing the algorithms on a test database were created.
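The paper's benchmark uses Java APIs; as an analogous sketch of how such a timing comparison can be set up, the hypothetical harness below times RSA encryption and decryption at two key sizes with Python's `cryptography` package (which does not ship ElGamal or ECIES, so only the key-size dimension is illustrated).

```python
import time
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
MESSAGE = b"benchmark payload"

def time_rsa(key_size, rounds=50):
    """Average encrypt/decrypt time (seconds) for one RSA key size."""
    private = rsa.generate_private_key(public_exponent=65537, key_size=key_size)
    public = private.public_key()
    t0 = time.perf_counter()
    for _ in range(rounds):
        ct = public.encrypt(MESSAGE, OAEP)
    t_enc = (time.perf_counter() - t0) / rounds
    t0 = time.perf_counter()
    for _ in range(rounds):
        private.decrypt(ct, OAEP)
    t_dec = (time.perf_counter() - t0) / rounds
    return t_enc, t_dec

for bits in (2048, 3072):
    enc, dec = time_rsa(bits)
    print(f"RSA-{bits}: encrypt {enc*1e3:.2f} ms, decrypt {dec*1e3:.2f} ms")
```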
A database is an organized collection of data. A number of techniques, such as encryption and electronic signatures, are currently available to protect data when it is transmitted across sites. Database security refers to the collective measures used to protect and secure a database or database management software from illegitimate use and from malicious threats and attacks. In this paper, we create six methods for storing and retrieving database information more securely, in a way that is both convenient and efficient. Confidentiality, integrity, and availability, also known as the CIA triad, is a model designed to guide policies for information security within the database. Among the many available cryptographic techniques, ECC is one of the most powerful. A user who wants to store or request data must first authenticate. Once authenticated, the user obtains a key from a key generator and then encrypts or decrypts data within the database; every key is stored in and retrieved from the key generator. We use 256-bit AES encryption for row-level, column-level, and element-level encryption of the database. The next two methods encrypt the random 256-bit AES key using 521-bit ECC encryption and signatures, for row-level and column-level encryption. The last method, the most secure in this paper, is element-level encryption, which combines AES and ECC encryption for confidentiality with an ECC signature on every element within the database for integrity. As well as encrypting data at rest, it is also important to ensure that confidential data are encrypted in motion over the network to protect database security. The advantage of element-level encryption is that it is difficult to attack, because an attacker who obtains a key compromises only one element; the disadvantage is that thousands or millions of keys must be managed.
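A minimal sketch of the element-level idea under stated assumptions: each database element gets its own random 256-bit AES key held in a key store, so compromising one key exposes only one element. AES-GCM stands in for the paper's AES mode (not specified above), and the ECC key-wrapping and signature steps are omitted; the table and column names are hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_store = {}   # stand-in for the paper's key generator / key store

def encrypt_element(table, row, column, plaintext):
    key = AESGCM.generate_key(bit_length=256)        # one key per element
    key_store[(table, row, column)] = key
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                        # value stored in the cell

def decrypt_element(table, row, column, stored):
    key = key_store[(table, row, column)]            # released to authenticated users only
    nonce, ciphertext = stored[:12], stored[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

cell = encrypt_element("patients", 7, "diagnosis", b"hypertension")
print(decrypt_element("patients", 7, "diagnosis", cell))  # b'hypertension'
```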
Datacenter-based cloud computing has induced new disruptive trends in networking, key among which is network virtualization. Software-Defined Networking overlays aim to improve the efficiency of next-generation multi-tenant datacenters. While early overlay prototypes are already available, they focus mainly on core functionality, and little is known yet about their impact on system-level performance. Using query completion time as our primary performance metric, we evaluate the overlay network's impact on two representative datacenter workloads, Partition/Aggregate and 3-Tier. We measure how much performance is traded for the overlay's benefits in manageability, security, and policing. Finally, we aim to assist datacenter architects by providing a detailed evaluation of the key overlay choices, all made possible by our accurate cross-layer hybrid/mesoscale simulation platform.
The usual approach to security for cloud-hosted applications is strong separation. However, it is often the case that the same data is used by different applications, particularly given the increase in data-driven (`big data' and IoT) applications. We argue that access control for the cloud should no longer be application-specific but should be data-centric, associated with the data that can flow between applications. Indeed, the data may originate outside cloud services from diverse sources such as medical monitoring, environmental sensing, etc. Information Flow Control (IFC) potentially offers data-centric, system-wide data access control. It has been shown that IFC can be provided at operating system level as part of a PaaS offering, with an acceptable overhead. In this paper we consider how IFC can be integrated with application-specific access control, transparently to application developers, while building, from simple IFC primitives, access control policies that align with the data management obligations of cloud providers and tenants.
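As a toy illustration of the simple IFC primitive that such data-centric control builds on, the sketch below models secrecy labels as tag sets and permits a flow from source to destination only when the destination's label dominates the source's. The tags and entities are hypothetical; real OS-level IFC tracks labels on processes and messages rather than plain Python sets.

```python
# Secrecy labels as sets of tags; a flow src -> dst is safe iff every
# secrecy tag carried by the source is also carried by the destination.
def flow_allowed(src_label: set, dst_label: set) -> bool:
    return src_label <= dst_label

medical_data    = {"medical", "patient-42"}
analytics_app   = {"medical", "patient-42", "research"}
advertising_app = {"advertising"}

print(flow_allowed(medical_data, analytics_app))    # True: label dominates
print(flow_allowed(medical_data, advertising_app))  # False: flow is blocked
```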
We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of its own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter, and scale a color-based fingerprint of web pages processed by a browser from the VM's memory. By analyzing the human perceptual similarity between the fingerprints, the architecture can reveal and mitigate phishing attacks based on redirection to spoofed web pages, and it can also detect “Man-in-the-Browser” (MitB) attacks. To the best of our knowledge, this is the first anti-phishing solution leveraging virtualization technologies. We explain details of the design and implementation and show the results of an evaluation with real-world data.
User testing is often used to inform the development of user interfaces (UIs). But what if an interface needs to be developed for a system that does not yet exist? In that case, existing datasets can provide valuable input for UI development. We apply a data-driven approach to the development of a privacy-setting interface for Internet-of-Things (IoT) devices. Applying machine learning techniques to an existing dataset of users' sharing preferences in IoT scenarios, we develop a set of "smart" default profiles. Our resulting interface asks users to choose among these profiles, which capture their preferences with an accuracy of 82%—a 14% improvement over a naive default setting and a 12% improvement over a single smart default setting for all users.
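One way such "smart" default profiles can be derived from a preference dataset is by clustering users' allow/deny decisions and rounding each cluster center to a profile. The sketch below uses k-means from scikit-learn on synthetic data; the dataset, number of scenarios, and number of profiles are assumptions, not the paper's actual data or method details.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 users x 12 IoT sharing scenarios, 1 = allow, 0 = deny.
users = rng.integers(0, 2, size=(200, 12))

# Cluster users and round each centroid to an allow/deny "default profile".
k = 4
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(users)
profiles = (kmeans.cluster_centers_ > 0.5).astype(int)

# Each user is assigned the nearest profile; accuracy is the fraction of
# scenarios where the chosen profile matches the user's true preference.
assigned = profiles[kmeans.predict(users)]
print("profiles:\n", profiles)
print("mean accuracy:", (assigned == users).mean())
```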
Software metrics help developers discover and fix mistakes. However, despite promising empirical evidence, vulnerability discovery metrics are seldom relied upon in practice. In prior research, the effectiveness of these metrics has typically been expressed using the precision and recall of a prediction model that uses the metrics as explanatory variables. These prediction models, being black boxes, may not be perceived as useful by developers. However, by systematically interpreting the models and metrics, we can provide developers with nuanced insights about factors that have led to security mistakes in the past. In this paper, we present a preliminary approach to using vulnerability discovery metrics to provide insightful feedback to developers as they engineer software. We collected ten metrics (churn, collaboration centrality, complexity, contribution centrality, nesting, known offender, source lines of code, \# inputs, \# outputs, and \# paths) from six open-source projects. We assessed the generalizability of the metrics across two contextual dimensions (application domain and programming language) and between projects within a domain, computed thresholds for the metrics using an unsupervised approach from the literature, and assessed the ability of these unsupervised thresholds to classify risk from historical vulnerabilities in the Chromium project. The observations from this study feed into our ongoing research to automatically aggregate insights from the various analyses and generate natural language feedback on security. We hope that our approach to generating automated feedback will accelerate the adoption of research in vulnerability discovery metrics.
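A small sketch of the unsupervised thresholding step under stated assumptions: the specific literature approach the authors use is not reproduced here; instead, quantile-based cut-offs over a single metric's distribution illustrate how files can be binned into risk levels. The churn values are made up.

```python
import numpy as np

# Made-up churn values for files in a project.
churn = np.array([3, 5, 8, 12, 15, 20, 40, 55, 90, 300])

# Unsupervised thresholds derived from the metric's own distribution (quartiles).
low, medium, high = np.quantile(churn, [0.25, 0.50, 0.75])

def risk_level(value):
    """Bin a metric value into a coarse risk level using the thresholds."""
    if value > high:
        return "high"
    if value > medium:
        return "medium"
    if value > low:
        return "low"
    return "minimal"

for v in (4, 18, 70, 250):
    print(v, risk_level(v))
```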
The risk posed by insider threats has usually been approached by analyzing the behavior of users solely in the cyber domain. In this paper, we show the viability of using physical movement logs, collected via a building access control system, together with an understanding of the layout of the building housing the system’s assets, to detect malicious insider behavior that manifests itself in the physical domain. In particular, we propose a systematic framework that uses contextual knowledge about the system and its users, learned from historical data gathered from a building access control system, to select suitable models for representing movement behavior. We then explore the online usage of the learned models, together with knowledge about the layout of the building being monitored, to detect malicious insider behavior. Finally, we show the effectiveness of the developed framework using real-life data traces of user movement in railway transit stations.
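The paper selects among several models for representing movement behavior; as an illustrative baseline only, the sketch below learns a first-order Markov chain over zone transitions from historical access logs and scores new traces, with very low scores flagging anomalous movement. The zones and traces are made up.

```python
from collections import defaultdict

# Historical movement logs: sequences of zones a user badged into (made up).
history = [
    ["lobby", "office", "lab", "office", "lobby"],
    ["lobby", "office", "office", "lobby"],
    ["lobby", "office", "lab", "lobby"],
]

# Learn first-order transition probabilities between zones.
counts = defaultdict(lambda: defaultdict(int))
for trace in history:
    for src, dst in zip(trace, trace[1:]):
        counts[src][dst] += 1
prob = {s: {d: n / sum(ds.values()) for d, n in ds.items()}
        for s, ds in counts.items()}

def trace_likelihood(trace, floor=1e-6):
    """Score an observed movement trace; unseen transitions get a tiny floor."""
    p = 1.0
    for src, dst in zip(trace, trace[1:]):
        p *= prob.get(src, {}).get(dst, floor)
    return p

print(trace_likelihood(["lobby", "office", "lab"]))        # typical movement
print(trace_likelihood(["lobby", "server_room", "lab"]))   # anomalous, tiny score
```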
Smart meters migrate the conventional electricity grid into a digitally enabled Smart Grid (SG), which is more reliable and efficient. Fine-grained energy consumption data collected by smart meters helps utility providers accurately predict users' demands and significantly reduce power generation cost, while it imposes severe privacy risks on consumers and may discourage them from using those “espionage meters”. To enjoy the benefits of smart meter measured data without compromising the users' privacy, in this paper we integrate distributed differential privacy (DDP) techniques into data-driven optimization and propose a novel scheme that not only minimizes the cost for utility providers but also preserves the DDP of users' energy profiles. Briefly, we add differentially private noise to the users' energy consumption data before the smart meters send it to the utility provider. Due to the uncertainty of the users' demand distribution, the utility provider aggregates a given set of historical users' differentially private data, estimates the users' demands, and formulates the data-driven cost minimization based on the collected noisy data. We also develop algorithms for feasible solutions and verify the effectiveness of the proposed scheme through simulations using simulated energy consumption data generated from the utility company's real data analysis.
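A minimal sketch of the per-meter noise step described above: each smart meter perturbs its reading with Laplace noise calibrated to a sensitivity and privacy budget before reporting it. The sensitivity, epsilon, and readings are illustrative, and the paper's distributed-DP aggregation and data-driven cost minimization are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def privatize(reading_kwh, sensitivity=1.0, epsilon=0.5):
    """Add Laplace(sensitivity/epsilon) noise to one meter reading."""
    return reading_kwh + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_readings = np.array([1.2, 0.8, 2.5, 1.9, 3.1])      # kWh, made up
noisy_reports = np.array([privatize(r) for r in true_readings])

# The utility provider only sees the noisy reports; the aggregate error
# becomes relatively small as the number of users grows.
print("true total :", true_readings.sum())
print("noisy total:", noisy_reports.sum())
```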
Data-driven verification methods utilize execution data together with models for establishing safety requirements. These are often the only tools available for analyzing complex, nonlinear cyber-physical systems, for which purely model-based analysis is currently infeasible. In this chapter, we outline the key concepts and algorithmic approaches for data-driven verification and discuss the guarantees they provide. We introduce some of the software tools that embody these ideas and present several practical case studies demonstrating their application in safety analysis of autonomous vehicles, advanced driver assist systems (ADAS), satellite control, and engine control systems.