Biblio
Realizing the benefits of SDN for many network management applications (e.g., traffic engineering, service chaining, topology reconfiguration) involves addressing complex optimizations that are central to these problems. Unfortunately, such optimization problems require (a) significant manual effort and expertise to express and (b) non-trivial computation and/or carefully crafted heuristics to solve. Our goal is to simplify the deployment of SDN applications using general high-level abstractions for capturing optimization requirements from which we can efficiently generate optimal solutions. To this end, we present SOL, a framework that demonstrates that it is possible to simultaneously achieve generality and efficiency. The insight underlying SOL is that many SDN applications can be recast within a unifying path-based optimization abstraction. Using this, SOL can efficiently generate near-optimal solutions and device configurations to implement them. We show that SOL provides comparable or better scalability than custom optimization solutions for diverse applications, allows a balancing of optimality and route churn per reconfiguration, and interfaces with modern SDN controllers.
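The flavor of such a path-based abstraction can be illustrated with a toy traffic-engineering instance (a minimal sketch, not SOL's actual API; the topology, link capacities, and demand below are hypothetical): a single demand is split across candidate paths so as to minimize the maximum link utilization, expressed as a small linear program.

```python
# Toy path-based traffic-engineering LP (illustrative only, not SOL).
# Variables: x1, x2 = fraction of the demand on each path, t = max
# utilization. Minimize t s.t. each link's utilization <= t, x1+x2 = 1.
from scipy.optimize import linprog

demand = 10.0                       # traffic volume to route
paths = [["A-B", "B-C"],            # path 1: links it traverses
         ["A-D", "D-C"]]            # path 2
capacity = {"A-B": 8, "B-C": 8, "A-D": 6, "D-C": 6}

links = sorted(capacity)
A_ub, b_ub = [], []
for link in links:
    row = [demand / capacity[link] if link in p else 0.0 for p in paths]
    row.append(-1.0)                # ... - t <= 0
    A_ub.append(row)
    b_ub.append(0.0)

res = linprog(c=[0, 0, 1],                  # objective: minimize t
              A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1, 1, 0]], b_eq=[1],   # path fractions sum to 1
              bounds=[(0, 1), (0, 1), (0, None)])
x1, x2, t = res.x
print(f"path splits: {x1:.2f}/{x2:.2f}, max link utilization: {t:.2f}")
```

Real applications add further constraints (e.g., service chaining, path limits), but the variables stay path-centric rather than per-link.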
Video streaming between a sender and a receiver involves multiple unsecured hops where the video data can be illegally copied if the nodes run malicious forwarding logic. This paper introduces a novel method to stream video data through dual channels using dual data paths. The video frames are divided into two frame streams, and each frame's pixels are scrambled. At the receiver side, the video is reconstructed and played for a limited time period; as soon as a small chunk of the merged video has been played, it is deleted from the video buffer. We have attempted to formalize the approach, and initial simulations have been performed in MATLAB. Preliminary results are promising, and a refined approach may lead to the formal design of a network-layer routing protocol with corresponding corrections at the transport layer.
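A minimal sketch of the two mechanisms the abstract names, frame splitting across dual paths and keyed pixel scrambling (illustrative Python, not the paper's MATLAB simulation; the frame size and key are made up):

```python
import numpy as np

def scramble(frame: np.ndarray, key: int) -> np.ndarray:
    """Permute a frame's pixels with a key-derived permutation."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(frame.size)
    return frame.reshape(-1)[perm].reshape(frame.shape)

def unscramble(frame: np.ndarray, key: int) -> np.ndarray:
    """Invert the keyed permutation at the receiver."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(frame.size)
    out = np.empty(frame.size, dtype=frame.dtype)
    out[perm] = frame.reshape(-1)
    return out.reshape(frame.shape)

frames = [np.random.randint(0, 256, (4, 4), dtype=np.uint8) for _ in range(6)]
key = 42
stream_a = [scramble(f, key) for f in frames[0::2]]  # even frames, path A
stream_b = [scramble(f, key) for f in frames[1::2]]  # odd frames, path B

# Receiver: interleave the two streams and unscramble each frame.
merged = [f for pair in zip(stream_a, stream_b) for f in pair]
recovered = [unscramble(f, key) for f in merged]
assert all((r == f).all() for r, f in zip(recovered, frames))
```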
Cloud computing is now an essential environment for data storage, and preserving the privacy of that data is a pressing concern. Cloud data security is the most important concern for clients using services offered by different providers, and major security conflicts can arise between a client and a service provider. To address these issues, a third-party auditor is used to provide assurance over the data in the environment. Cloud storage systems still face many fundamental challenges, among which storage space and security are generally the top concerns. To address the relevant security issues, we propose a third-party authentication system that supports not only simplified data storage but also secure data acquisition in the cloud. Finally, we perform security and performance analyses, whose results show that the proposed scheme significantly increases efficiency while maintaining highly secure data storage and acquisition. The proposed method also helps to minimize cost and increases communication efficiency in the cloud environment.
Nowadays, cloud computing is a power station that runs multiple businesses, and it is accumulating more users every day. Database-as-a-Service is a cloud computing service model for storing, managing, and processing data on a cloud platform. Its key characteristics are availability, scalability, and elasticity: a customer does not have to worry about database installation and management, because the cloud database service provider takes responsibility for installing and maintaining the database. The real problem occurs when confidential or private information is stored in the cloud database, since the cloud database vendor cannot be relied upon: a curious vendor may capture and leak the secret information. Protected Database-as-a-Service is a novel solution to this problem that provides provable and practical privacy in the face of a compromised cloud database service provider. It defines various encryption schemes from which the encryption algorithm and encryption key used to encrypt and decrypt data are chosen. It also provides each user with a master key, so that the metadata storage table can be decrypted only with that user's master key. As a result, the cloud service vendor never gets access to decrypted data; even in the inauspicious circumstance that all servers are compromised, the vendor will not be able to decrypt it. The proposed Protected Database-as-a-Service system allows multiple geographically distributed clients to execute concurrent and independent operations on encrypted data while preserving data confidentiality and consistency at the cloud level, eliminating any intermediate server between the client and the cloud database.
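A minimal sketch of the master-key idea, using the third-party cryptography package and a toy two-column metadata table (this is illustrative, not the proposed system's actual scheme set): each column gets its own data key, and the metadata table stores only master-key-wrapped keys, so the vendor never sees plaintext keys or data.

```python
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()      # held only by the client
master = Fernet(master_key)

# Per-column data keys, wrapped with the master key before upload.
columns = ["ssn", "salary"]
data_keys = {c: Fernet.generate_key() for c in columns}
metadata_table = {c: master.encrypt(k) for c, k in data_keys.items()}

# Client-side encryption of a cell before it is sent to the cloud DB.
cell = Fernet(data_keys["ssn"]).encrypt(b"123-45-6789")

# Later, any authorized client unwraps the column key and decrypts.
unwrapped = master.decrypt(metadata_table["ssn"])
assert Fernet(unwrapped).decrypt(cell) == b"123-45-6789"
```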
During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information that are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We also describe a production framework for collecting and reporting technical security metrics which is based on novel open-source technologies for big data.
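As a toy illustration of deriving technical security metrics from a common log type, the sketch below counts IDS alarms per hour and per signature; the Snort-style "fast" alert format and the sample lines are assumptions, not artifacts of the paper's framework.

```python
import re
from collections import Counter

# Two fabricated alert lines in a Snort-style "fast" format (assumed).
LOG = """\
01/12-10:23:45.123456 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Priority: 1] {TCP} 10.0.0.5:4444 -> 10.0.0.9:80
01/12-10:47:02.000001 [**] [1:2010935:2] ET SCAN Suspicious inbound to mySQL port 3306 [**] [Priority: 2] {TCP} 10.0.0.7:5555 -> 10.0.0.9:3306
"""

LINE = re.compile(
    r"^(\d\d)/(\d\d)-(\d\d):\d\d:\d\d\.\d+ \[\*\*\] \[[\d:]+\] (.+?) \[\*\*\]")

alarms_per_hour = Counter()
by_signature = Counter()
for line in LOG.splitlines():
    m = LINE.match(line)
    if m:
        month, day, hour, signature = m.groups()
        alarms_per_hour[f"{month}/{day} {hour}:00"] += 1
        by_signature[signature] += 1

print("alarms per hour:", dict(alarms_per_hour))
print("top signatures:", by_signature.most_common(2))
```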
Collaboration between humans and machines does not necessarily lead to better outcomes.
In an attempt to coerce useful information about the behavior of new malware families, threat analysts commonly force newly collected malicious software samples to run within a sandboxed environment. The main goal is to gather intelligence that can later be leveraged to detect and enumerate new malware infections within a network. Currently, most analysis environments "blindly" execute each newly collected malware sample for a predetermined amount of time (e.g., four to five minutes). However, a large majority of malware samples that are forced through sandbox execution are simply repackaged versions of previously seen (and already analyzed) malware. Consequently, a significant amount of time may be wasted in analyzing samples that do not generate new intelligence. In this paper, we propose MAXS, a novel probabilistic multi-hypothesis testing framework for scaling execution in malware analysis environments, including bare-metal execution environments. Our main goal is to automatically recognize whether a malware sample that is undergoing dynamic analysis has likely been seen before (e.g., in a "differently packed" form), and determine whether we could therefore stop its execution early while avoiding loss of valuable malware intelligence (e.g., without missing DNS queries to never-before-seen malware command-and-control domains). We have tested our prototype implementation of MAXS over two large collections of malware execution traces obtained from two distinct production-level analysis environments. Our experimental results show that using MAXS we are able to reduce malware execution time by up to 50% on average, with less than 0.3% information loss. This roughly translates into the ability to double the capacity of malware sandbox environments, thus significantly optimizing the resources dedicated to malware execution and analysis. Our results are particularly important for bare-metal execution environments, in which it is not easy to leverage the economies of scale that characterize virtual-machine- or emulation-based malware sandboxes. For example, MAXS could be used to significantly cut the cost of bare-metal analysis environments by reducing the hardware resources needed to analyze a predetermined daily number of new malware samples.
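The early-termination idea can be caricatured as follows (a deliberately simplified sketch, not MAXS's probabilistic multi-hypothesis test): keep a set of behavioral events for each already-analyzed sample and stop a running analysis once the observed event set is sufficiently similar to a known one. The trace contents and threshold here are hypothetical.

```python
def jaccard(a: set, b: set) -> float:
    """Set similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

KNOWN_TRACES = {                     # hypothetical, already-analyzed samples
    "family_x": {"dns:evil.example", "mutex:x1", "file:drop.exe"},
    "family_y": {"dns:c2.example", "reg:run_key"},
}
THRESHOLD = 0.8

def should_stop_early(observed_events: set) -> str | None:
    """Return the matching known sample if execution can stop early."""
    for name, trace in KNOWN_TRACES.items():
        if jaccard(observed_events, trace) >= THRESHOLD:
            return name
    return None

# During sandbox execution, feed in events as they arrive:
events = {"dns:evil.example", "mutex:x1", "file:drop.exe"}
print(should_stop_early(events))     # -> "family_x"
```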
In this paper, we consider ways of organizing group authentication, as well as the features of constructing isogenies of elliptic curves. The work includes a study of isogeny graphs and their application in post-quantum systems. A hierarchical group authentication scheme has been developed using transformations based on the search for isogenies of elliptic curves.
A multipurpose color image watermarking method is presented to provide copyright protection and ownership verification of multimedia information. For robust color image watermarking, a color watermark is utilized to bring universality and broad applicability to the proposed scheme. The cover image is first separated into its Red, Green, and Blue components. Each component is transformed into the wavelet domain using the Discrete Wavelet Transform (DWT), and then decomposition techniques such as Singular Value Decomposition (SVD), QR, and Schur decomposition are applied. Multiple watermark embedding keeps the watermarking scheme free from false-positive errors. The watermark is modified by scrambling it with the Arnold transform. In the proposed watermarking scheme, robustness and quality are tested with metrics such as Peak Signal to Noise Ratio (PSNR) and Normalized Correlation Coefficient (NCC). Further, the proposed scheme is compared with related watermarking schemes.
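One embedding step of such a DWT-plus-decomposition pipeline can be sketched for a single color channel using SVD only and the third-party PyWavelets package (the scaling factor and additive embedding rule are illustrative choices, not necessarily the paper's):

```python
import numpy as np
import pywt

def embed(channel: np.ndarray, watermark: np.ndarray, alpha: float = 0.05):
    """Embed a watermark into one color channel's LL subband."""
    LL, (LH, HL, HH) = pywt.dwt2(channel.astype(float), "haar")
    U, S, Vt = np.linalg.svd(LL)
    Uw, Sw, Vwt = np.linalg.svd(watermark.astype(float))
    S_marked = S + alpha * Sw[: len(S)]          # modify singular values
    LL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

channel = np.random.randint(0, 256, (64, 64))    # stand-in for an R/G/B plane
watermark = np.random.randint(0, 2, (32, 32)) * 255
marked = embed(channel, watermark)
print("max per-pixel change:", np.abs(marked - channel).max())
```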
This paper presents an integrated Analog Delay Line (ADL) for analog RF signal processing. The design is inspired by a Bucket Brigade Device (BBD) structure: it transfers charges from a sampled input signal stage after stage, and thus belongs to the family of Charge-Coupled Devices (CCDs). The ADL is fully differential with Common Mode (CM) control. The design uses the 28nm Fully Depleted Silicon on Insulator (FDSOI) technology from STMicroelectronics, and the reported results come from simulations in Cadence Spectre.
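Behaviorally, a bucket-brigade line is an N-stage analog shift register; the sketch below models that discrete-time behavior only and says nothing about the FDSOI circuit itself (the stage count is arbitrary).

```python
from collections import deque

N_STAGES = 8
line = deque([0.0] * N_STAGES, maxlen=N_STAGES)  # stored "charges"

def clock(sample: float) -> float:
    """One clock cycle: shift every charge one stage down the line."""
    out = line[0]            # oldest charge reaches the output
    line.append(sample)      # new sample enters; deque drops line[0]
    return out

impulse = [1.0] + [0.0] * 10
print([clock(s) for s in impulse])   # the impulse emerges N_STAGES later
```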
Traditional risk management produces a rather static listing of weaknesses, probabilities, and mitigations, yet a large share of cyber security risks materializes through computer networks. These attacks or attack attempts produce events that are detected by various monitoring techniques, such as Intrusion Detection Systems (IDS). Often the link between detecting these potentially dangerous real-time events and the risk management process is weak or completely missing. This paper presents a means of instantly transferring network events into risk management and visualizing them with a tool called the Metrics Visualization System (MVS). As a case study, the tool is used to dynamically visualize network security events of a Terrestrial Trunked Radio (TETRA) network running in a Software Defined Networking (SDN) context. Visualizations are presented as a tree-like graph that gives a quick, easily understandable overview of the cyber security situation. This paper also discusses which network security events are monitored and how they affect the more general risk levels. The major benefit of this approach is that the risk analyst is able to map the designed risk tree/security metrics onto actual real-time events and view the system's security posture with the help of a runtime visualization view.
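The tree-like aggregation can be sketched as follows (illustrative only, not the MVS implementation): leaves carry risk levels driven by live events, and each parent reports the maximum of its children, so a single alarm propagates to the top-level posture. Node names and levels are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RiskNode:
    name: str
    level: int = 0                        # 0 = ok .. 3 = critical
    children: list["RiskNode"] = field(default_factory=list)

    def aggregate(self) -> int:
        """Recompute this node's level as the max of its subtree."""
        if self.children:
            self.level = max(c.aggregate() for c in self.children)
        return self.level

ids = RiskNode("IDS alarms")
flows = RiskNode("Anomalous flows")
root = RiskNode("TETRA network risk", children=[ids, flows])

ids.level = 3                             # a real-time event raises a leaf
print(root.aggregate())                   # -> 3: visible at the top
```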
Trust is known to be a key component of human social relationships; to a large extent, it is trust that defines how humans behave with one another. Generative models have been used extensively in the study of social networks to simulate different characteristics and phenomena of social graphs. In this work, an attempt is made to understand how trust in social graphs can be combined with generative modeling techniques to generate trust-based social graphs. These generated social graphs are then compared with the original social graphs to evaluate how trust helps in generative modeling. Two well-known social network data sets, soc-Bitcoin and the wiki administrator network, are used in this work. Social graphs are generated from these data sets and then compared with the original graphs, alongside other standard generative modeling techniques, to see how well trust serves as a component of generative modeling. Generative modeling techniques have been available for a while, but this investigation with real social graph data sets validates that trust can be an important factor in generative modeling.
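One simple way to fold trust into a generative model, sketched below with the third-party networkx package, is preferential attachment in which the attachment probability is proportional to trust rather than degree; the construction and all scores are illustrative, not the paper's exact model.

```python
import random
import networkx as nx

random.seed(7)
trust = {0: 0.9, 1: 0.6, 2: 0.2}          # seed nodes with trust scores
G = nx.Graph()
G.add_nodes_from(trust)

for new in range(3, 50):
    nodes = list(trust)
    weights = [trust[n] for n in nodes]    # attach proportionally to trust
    targets = random.choices(nodes, weights=weights, k=2)
    G.add_edges_from((new, t) for t in set(targets))
    trust[new] = random.uniform(0.1, 1.0)  # newcomer gets a trust score

# High-trust early nodes should end up as hubs:
print(sorted(G.degree, key=lambda d: -d[1])[:3])
```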
The increasing complexity of cyber-attacks necessitates the design of more efficient hardware architectures for real-time Intrusion Detection Systems (IDSs). String matching is the main performance-demanding component of an IDS. An effective technique to design high-performance string matching engines is to partition the target set of strings into multiple subgroups and to use a parallel string matching hardware unit for each subgroup. This paper introduces a novel pattern grouping algorithm for heterogeneous bit-split string matching architectures. The proposed algorithm presents a reliable method to estimate the correlation between strings. The correlation factors are then used to find a preferred group for each string in a seed growing approach. Experimental results demonstrate that the proposed algorithm achieves an average of 41% reduction in memory consumption compared to the best existing approach found in the literature, while offering orders of magnitude faster execution time compared to an exhaustive search.
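The seed-growing step can be caricatured as follows; the correlation estimate here (longest common prefix, which bit-split engines can share as common transitions) is a stand-in for the paper's estimator, and the patterns are hypothetical.

```python
import os

def correlation(a: str, b: str) -> int:
    """Stand-in correlation: length of the longest common prefix."""
    return len(os.path.commonprefix([a, b]))

def group_patterns(patterns: list[str], n_groups: int) -> list[list[str]]:
    remaining = sorted(patterns, key=len, reverse=True)
    groups = [[remaining.pop(0)] for _ in range(n_groups)]   # longest = seeds
    for p in remaining:                    # grow each group greedily
        best = max(groups, key=lambda g: max(correlation(p, q) for q in g))
        best.append(p)
    return groups

rules = ["GET /admin/users", "POST /login/oauth", "GET /admin",
         "POST /login", "EXEC xp_cmdshell"]
print(group_patterns(rules, 2))
```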
With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape to defend their resources efficiently. To achieve this, organizations must develop a culture of sharing their threat information with others so that the associated risks can be assessed effectively. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, deciding whether to share information is a challenging task that requires resolving the trade-off between sharing advantages and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial for stabilizing the system by setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system as a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component into the sharing model. Using best-response analysis, the simulation results demonstrate the efficiency of our proposed framework.
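A toy best-response iteration conveys the trade-off (the payoff function, its parameters, and the fee are hypothetical, chosen only to exhibit sharing benefit versus privacy cost, and are not the paper's model):

```python
import numpy as np

LEVELS = np.linspace(0, 1, 101)            # candidate sharing levels

def payoff(own: float, other: float,
           benefit: float = 3.0, privacy_cost: float = 1.0,
           fee: float = 0.1) -> float:
    """Gain from the other's sharing, minus privacy cost and CYBEX fee."""
    return benefit * other * own - privacy_cost * own**2 - fee

def best_response(other: float) -> float:
    return LEVELS[np.argmax([payoff(x, other) for x in LEVELS])]

# Iterate best responses until the two organizations' strategies settle.
a, b = 0.5, 0.5
for _ in range(50):
    a, b = best_response(b), best_response(a)
print(f"equilibrium sharing levels: {a:.2f}, {b:.2f}")
```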
We survey elliptic curve implementations from several vantage points. We perform internet-wide scans for TLS on a large number of ports, as well as for SSH and IPsec, to measure elliptic curve support and implementation behaviors, and we collect passive measurements of client curve support for TLS. We also perform active measurements to estimate server vulnerability to known attacks against elliptic curve implementations, including support for weak curves, invalid curve attacks, and curve twist attacks. We estimate that 1.53% of HTTPS hosts, 0.04% of SSH hosts, and 4.04% of IKEv2 hosts that support elliptic curves do not perform curve validity checks as specified in elliptic curve standards. We describe how such vulnerabilities could be used to construct an elliptic curve parameter downgrade attack for TLS, called CurveSwap, and observe that none of the combinations of weak behaviors we examined appear to enable a feasible CurveSwap attack in the wild. We also analyze source code for elliptic curve implementations, finding that a number of libraries fail to perform point validation for JSON Web Encryption, and find a flaw in the Java and NSS multiplication algorithms.
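The missing validity check is inexpensive: a received point must satisfy the intended curve equation before any scalar multiplication, or an attacker can substitute a point on a weaker curve or its twist. A sketch over a toy short-Weierstrass curve (the parameters are illustrative, not a production curve):

```python
def is_valid_point(x: int, y: int, p: int, a: int, b: int) -> bool:
    """Check that (x, y) satisfies y^2 = x^3 + a*x + b (mod p)."""
    if not (0 <= x < p and 0 <= y < p):    # coordinates must be in the field
        return False
    return (y * y - (x * x * x + a * x + b)) % p == 0

# Toy curve y^2 = x^3 + 2x + 3 over F_97 (illustrative only).
p, a, b = 97, 2, 3
print(is_valid_point(3, 6, p, a, b))    # 36 == 27 + 6 + 3 -> True
print(is_valid_point(3, 7, p, a, b))    # off-curve point -> rejected
```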
We analyze the security practices of three smart toys that communicate with children through voice commands. We show the general communication architecture, along with general security and privacy practices of each of the devices. We then focus on the analysis of one particular toy and show how attackers can decrypt communications to and from a target device; perhaps more worryingly, attackers can also inject audio into the toy, so that children listen to any arbitrary audio file the attacker sends to the toy. This last attack raises new safety concerns that manufacturers of smart toys should prevent.
Network architectures and applications are becoming increasingly complex. Several approaches to automatically enforce configurations on devices, applications, and services have been proposed, such as Policy-Based Network Management (PBNM). However, the management of enforced configurations in production environments (e.g., a data center) is a crucial and complex task; for example, a firewall configuration may need to be updated to change a set of rules. Although this task is fundamental for complex systems, few effective solutions have been proposed for monitoring and managing enforced configurations. This work proposes a novel approach to monitor and manage enforced configurations in production environments. The main contributions of this paper are a formal model to identify/generate traffic flows and to verify the enforced configurations, and a slim and transparent framework to perform the policy assessment. We have implemented and validated our approach in a virtual environment in order to evaluate different scenarios. The results demonstrate that the prototype is effective and has good performance; therefore, our model can be effectively used to analyze several types of IT infrastructures. A further interesting result is that our approach is complementary to PBNM.
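A minimal sketch of the assessment loop (not the paper's formal model): synthesize probe flows, evaluate each against a first-match rule list, and flag divergence from the intended policy. The rules and flows below are hypothetical.

```python
from ipaddress import ip_address, ip_network

RULES = [  # (src network, dst port or None = any, action), first-match
    ("10.0.1.0/24", 22, "deny"),
    ("10.0.0.0/16", 22, "allow"),
    ("0.0.0.0/0", None, "deny"),        # default deny
]

def evaluate(src: str, dport: int) -> str:
    """Return the action the enforced configuration applies to a flow."""
    for net, port, action in RULES:
        if ip_address(src) in ip_network(net) and port in (None, dport):
            return action
    return "deny"

# Probe flows paired with the outcome the policy intends:
probes = [("10.0.1.5", 22, "deny"), ("10.0.2.5", 22, "allow"),
          ("10.0.2.5", 80, "deny")]
for src, dport, expected in probes:
    actual = evaluate(src, dport)
    print(f"{src}:{dport} -> {actual} (expected {expected})",
          "OK" if actual == expected else "MISMATCH")
```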
We show that a class of statistical properties of distributions, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated given a sublinear-sized sample. Specifically, given a sample consisting of independent draws from any distribution over at most k distinct elements, these properties can be estimated accurately using a sample of size O(k / log k). For these estimation tasks, this performance is optimal, to constant factors. Complementing these theoretical results, we also demonstrate that our estimators perform exceptionally well in practice for a variety of estimation tasks, on a variety of natural distributions, and for a wide range of parameters. The key step in our approach is to first use the sample to characterize the "unseen" portion of the distribution, effectively reconstructing this portion of the distribution as accurately as if one had a logarithmic factor larger sample. This goes beyond such tools as the Good-Turing frequency estimation scheme, which estimates the total probability mass of the unobserved portion of the distribution: we seek to estimate the shape of the unobserved portion. This work can be seen as introducing a robust, general, and theoretically principled framework that, for many practical applications, essentially amplifies the sample size by a logarithmic factor; we expect that it may be fruitfully used as a component within larger machine learning and statistical analysis systems.
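For contrast, the Good-Turing baseline mentioned above is tiny to implement: the total probability mass of unseen elements is estimated by F1/n, where F1 counts the elements observed exactly once. The sketch below computes only that mass; reconstructing the shape of the unseen portion, as the paper does, requires the paper's estimator.

```python
from collections import Counter

def good_turing_unseen_mass(sample: list) -> float:
    """Good-Turing estimate of the total probability mass not yet seen."""
    counts = Counter(sample)
    f1 = sum(1 for c in counts.values() if c == 1)   # singletons
    return f1 / len(sample)

sample = list("abracadabra")            # 'c' and 'd' appear once; n = 11
print(good_turing_unseen_mass(sample))  # -> 2/11, about 0.18
```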
In this paper we conduct an empirical study with the purpose of identifying common software weaknesses of embedded devices used as part of industrial control systems in power grids. Data is gathered about the devices and software of six companies: ABB, General Electric, Schneider Electric, Schweitzer Engineering Laboratories, Siemens, and Wind River. The study uses data from the manufacturers' online databases, NVD, CWE, and ICS-CERT. We identified that the most commonly reported problems are related to improper input validation, cryptographic issues, and programming errors.
We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent.