Bibliography
Kerberos is a widely used third-party authentication protocol that enables computers to connect securely over an insecure channel using single sign-on. It proves the identity of clients and encrypts all communications between them to ensure data privacy and integrity. Typically, Kerberos consists of three communication phases to establish a secure session between any two clients. Authentication is based on a password-derived secret long-term key shared between the client and the Kerberos server. As a result, Kerberos is vulnerable to password-guessing attacks, which is its main drawback. In this paper, we overcome this limitation by modifying the initial phase using a virtual password and biometric data. In addition, the proposed protocol provides strong authentication against multiple types of attacks.
Today's era of the Internet of Things, cloud computing, and big data centers calls for more fresh graduates with expertise in digital data processing techniques such as compression, encryption, and error-correcting codes. This paper describes a project-based elective that covers these three main digital data processing techniques and can be offered to three different undergraduate majors: electrical engineering, computer engineering, and computer science. The course has been offered successfully for three years. Registration statistics show equal interest from the three majors. Assessment data show that students have successfully met the different course outcomes. Student feedback shows that students appreciate the knowledge they gain from this elective and indicates that the workload for this course, relative to other courses of equal credit, is as expected.
An Advanced Persistent Threat (APT) is a complex (Advanced) cyber-attack (Threat) against specific targets over long periods of time (Persistent), carried out by nation states or terrorist groups with highly sophisticated expertise in order to establish footholds in organizations that are critical to a country's socio-economic status. The key identifier of such persistent threats is that their patterns are long-term, potentially high-priority, and occur consistently over a period of time. This paper focuses on identifying persistent threat patterns in network data, particularly data collected from Intrusion Detection Systems. We utilize Association Rule Mining (ARM) to detect persistent threat patterns in network data, identifying potential patterns that are frequent but at the same time unusual compared with the other frequent patterns.
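For intuition, the following is a minimal sketch of mining frequent-but-unusual patterns from IDS alert records with association rule mining, using the apriori implementation in the mlxtend library. The alert fields, thresholds, and the notion of "unusual" (lift deviating from the bulk) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: mine frequent patterns from IDS alert records with
# association rule mining, then rank rules by how far their lift deviates
# from the bulk (a crude proxy for "frequent but unusual"). Field names,
# thresholds, and the toy data are illustrative assumptions only.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each "transaction" is the set of attributes observed in one IDS alert.
alerts = [
    ["src:10.0.0.5", "dst_port:445", "sig:SMB_SCAN"],
    ["src:10.0.0.5", "dst_port:445", "sig:SMB_SCAN"],
    ["src:10.0.0.5", "dst_port:22",  "sig:SSH_BRUTE"],
    ["src:10.0.0.9", "dst_port:445", "sig:SMB_SCAN"],
    ["src:10.0.0.5", "dst_port:445", "sig:SMB_SCAN"],
]

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(alerts).transform(alerts),
                      columns=encoder.columns_)

# Frequent itemsets: patterns that recur over the observation window.
frequent = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# "Frequent but unusual": inspect rules whose lift deviates most from the bulk.
rules["lift_dev"] = (rules["lift"] - rules["lift"].median()).abs()
print(rules.sort_values("lift_dev", ascending=False)
           [["antecedents", "consequents", "support", "confidence", "lift"]])
```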
Online privacy policies notify users of a Website how their personal information is collected, processed, and stored. Against the background of rising privacy concerns, privacy policies seem to represent an influential instrument for increasing customer trust and loyalty. In practice, however, consumers seem to read privacy policies only in rare cases, possibly reflecting the common assumption that policies are hard to comprehend. By designing and implementing an automated extraction and readability analysis toolset that embodies a diversity of established readability measures, we present the first large-scale study providing current empirical evidence on the readability of nearly 50,000 privacy policies of popular English-language Websites. The results empirically confirm that, on average, current privacy policies are still hard to read. Furthermore, this study presents new theoretical insights for readability research, in particular on the extent to which practical readability measures are correlated. Specifically, it shows the redundancy of several well-established readability metrics such as SMOG, RIX, LIX, GFI, FKG, ARI, and FRES, thus easing future choices and comparisons between readability studies, and calling for research towards a framework of readability measures. Moreover, a more sophisticated privacy policy extractor and analyzer as well as a solid policy text corpus for further research are provided.
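To make two of the metrics named above concrete, the sketch below computes the Flesch-Kincaid Grade (FKG) and the Automated Readability Index (ARI) with naive tokenization; the syllable counter is a crude heuristic and the policy snippet is invented, so this is not the toolset used in the study.

```python
# Rough sketch of two of the readability metrics named above, computed with
# naive tokenization; the syllable counter is a crude heuristic and the
# example policy text is invented.
import re

def counts(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    chars = sum(len(w) for w in words)
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    return sentences, max(1, len(words)), chars, syllables

def flesch_kincaid_grade(text):
    s, w, _, syl = counts(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59

def automated_readability_index(text):
    s, w, c, _ = counts(text)
    return 4.71 * (c / w) + 0.5 * (w / s) - 21.43

policy = ("We collect personal information that you provide to us. "
          "This information may be shared with third parties for analytics.")
print(f"FKG: {flesch_kincaid_grade(policy):.1f}")
print(f"ARI: {automated_readability_index(policy):.1f}")
```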
In order to integrate equipment from different vendors, wireless sensor networks need to become more standardized. Using IP as the basis of low-power radio networks, together with application-layer standards designed for this purpose, is one way forward. This research focuses on implementing and deploying a system using Contiki and 6LoWPAN over an 868 MHz radio network, together with CoAP as a standard application-layer protocol. A system was deployed in the Cairngorm mountains in Scotland as an environmental sensor network, measuring streams, temperature profiles in peat, and periglacial features. It was found that RPL provided an effective routing algorithm, and that the use of UDP packets with CoAP proved to be an energy-efficient application layer. This combination of technologies can be very effective in large-area sensor networks.
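As a point of reference for the application layer, the following is a minimal client-side sketch of reading a sensor resource over CoAP using the aiocoap Python library on a PC; the node address and resource path are hypothetical, and the deployed system in the paper runs CoAP on Contiki motes over 6LoWPAN rather than on a workstation.

```python
# Minimal client-side sketch of a CoAP GET against a sensor node, using the
# aiocoap library. The node address and resource path are hypothetical; the
# system described above runs CoAP on Contiki motes over 6LoWPAN.
import asyncio
from aiocoap import Context, Message, GET

async def read_temperature(uri):
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri=uri)
    response = await ctx.request(request).response
    return response.payload.decode()

if __name__ == "__main__":
    # Example IPv6 address of a 6LoWPAN node as reached through a border router.
    reading = asyncio.run(read_temperature("coap://[2001:db8::1]/sensors/temp"))
    print("Temperature:", reading)
```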
This work proposes a novel approach to infer and characterize Internet-scale DNS amplification DDoS attacks by leveraging the darknet space. Complementary to the pioneering work on inferring Distributed Denial of Service (DDoS) attacks using the darknet, this work shows that we can extract DDoS activities without relying on backscatter analysis. The aim of this work is to extract cyber security intelligence related to DNS amplification DDoS activities such as detection period, attack duration, intensity, packet size, rate, and geolocation, in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks. We empirically evaluate the proposed approach using 720 GB of real darknet data collected from a /13 address space during a recent three-month period. Our analysis reveals that the approach was successful in inferring significant DNS amplification DDoS activities, including the recent prominent attack that targeted one of the largest anti-spam organizations. Moreover, the analysis disclosed the mechanism of such DNS amplification DDoS attacks. Further, the results uncover high-speed and stealthy attempts that were never previously documented. The case study of the largest DDoS attack in history led to a better understanding of the nature and scale of this threat and can generate inferences that could contribute to detecting, preventing, assessing, mitigating, and even attributing DNS amplification DDoS activities.
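The following toy sketch illustrates the kind of inference described above: unsolicited DNS responses landing in darknet space imply that darknet addresses were spoofed as query sources, and a high per-resolver response rate over a time window suggests an amplification campaign. The capture file name, thresholds, and sliding-window heuristic are illustrative assumptions, not the parameters used in the paper.

```python
# Toy sketch: flag DNS responses arriving at darknet addresses and report
# resolvers whose response rate over a sliding window is anomalously high.
# File name, thresholds, and the heuristic itself are illustrative only.
from collections import defaultdict
from scapy.all import rdpcap, IP, UDP, DNS

WINDOW = 60.0          # seconds
RATE_THRESHOLD = 100   # DNS responses per resolver per window

packets = rdpcap("darknet_capture.pcap")
per_resolver = defaultdict(list)

for pkt in packets:
    # Keep only DNS responses (source port 53, QR bit set) hitting the darknet.
    if IP in pkt and UDP in pkt and DNS in pkt \
            and pkt[UDP].sport == 53 and pkt[DNS].qr == 1:
        per_resolver[pkt[IP].src].append(float(pkt.time))

for resolver, times in per_resolver.items():
    times.sort()
    start, peak = 0, 0
    for i, t in enumerate(times):
        while t - times[start] > WINDOW:
            start += 1
        peak = max(peak, i - start + 1)
    if peak >= RATE_THRESHOLD:
        print(f"possible amplification source {resolver}: "
              f"{peak} responses in a {WINDOW:.0f}s window")
```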
When implemented on real systems, cryptographic algorithms are vulnerable to attacks that observe their execution behavior, such as cache-timing attacks. Designing protected implementations must be done with knowledge and validation tools as early as possible in the development cycle. In this article we propose a methodology to assess the robustness of the candidates in the NIST post-quantum standardization project against cache-timing attacks. To this end we have developed a dedicated vulnerability research tool. It performs a static analysis with taint propagation of sensitive variables across the source code and detects leakage patterns. We use it to assess the security of the NIST post-quantum cryptography project submissions. Our results show that more than 80% of the analyzed implementations have at least one potential flaw, and three submissions total more than 1000 reported flaws each. Finally, this comprehensive study of the competitors' security allows us to identify the most frequent weaknesses amongst candidates and how they might be fixed.
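The idea of static taint propagation can be illustrated with a deliberately simplified sketch: starting from declared secret inputs, taint flows through assignments, and any branch condition depending on a tainted value is flagged as a classic cache-timing leakage pattern. This toy operates on hand-written three-address statements and is not the authors' tool.

```python
# Deliberately simplified illustration of static taint propagation: starting
# from declared secrets, propagate taint through assignments and flag
# secret-dependent branch conditions (a classic cache-timing leakage pattern).
# Toy three-address statements, not the authors' tool.

# Each statement is (kind, target, sources). The program is illustrative only.
program = [
    ("assign", "k",   ["secret_key"]),   # k = secret_key
    ("assign", "t",   ["k", "msg"]),     # t = f(k, msg)
    ("branch", None,  ["t"]),            # if t != 0: ...   <-- secret-dependent
    ("assign", "out", ["msg"]),          # out = g(msg)
    ("branch", None,  ["out"]),          # if out: ...      <-- fine
]

tainted = {"secret_key"}   # sensitive inputs declared up front
findings = []

for idx, (kind, target, sources) in enumerate(program):
    if kind == "assign":
        if any(s in tainted for s in sources):
            tainted.add(target)          # taint flows through the assignment
    elif kind == "branch":
        if any(s in tainted for s in sources):
            findings.append(idx)         # control flow depends on a secret

for idx in findings:
    print(f"potential cache-timing leak: branch at statement {idx} depends on a secret")
```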
As chips become more and more connected, they are more exposed, both to network and to physical attacks. Therefore, one must ensure that they enjoy a sufficient level of protection. Security within chips is accordingly becoming a hot topic. Incident detection and reporting is one novel function expected from chips. In this talk, we explain why it is worthwhile to resort to Artificial Intelligence (AI) for security event handling. The drivers are the need to aggregate multiple and heterogeneous security sensors and the need to digest this information quickly into exploitable information, all while maintaining a low false-positive detection rate. Key features are adequate learning procedures and fast, secure classification accelerated by hardware. A challenge is to embed such security-oriented AI logic without compromising the chip's power budget and silicon area. This talk accounts for the opportunities offered by the symbiotic encounter between chip security and AI.
Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in polylogarithmic time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating interleaved queries and updates to this randomized data structure may be of independent interest.
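For context, the sketch below runs exact (non-approximate) greedy minimum degree elimination on a tiny graph: repeatedly eliminate the vertex of minimum degree and connect its remaining neighbors into a clique of fill edges. This naive quadratic loop is precisely the primitive that the paper's sketching data structure accelerates; the example graph is illustrative.

```python
# Exact greedy minimum degree elimination on a small graph: repeatedly remove
# the vertex of minimum degree and add fill edges among its remaining
# neighbors. This naive loop is the primitive the paper speeds up with
# sketching; the example graph is illustrative.

def min_degree_ordering(adj):
    """adj: dict vertex -> set of neighbors (undirected). Returns an ordering."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # vertex of minimum degree
        nbrs = adj.pop(v)
        order.append(v)
        for u in nbrs:                            # remove v everywhere
            adj[u].discard(v)
        for u in nbrs:                            # add fill edges (clique)
            adj[u] |= nbrs - {u}
    return order

# Arrowhead-like example: vertex 0 touches everything, so eliminating it first
# would create a dense clique; minimum degree defers it to the end.
graph = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(min_degree_ordering(graph))   # e.g. [1, 2, 3, 4, 0]
```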
Event logs that originate from information systems enable comprehensive analysis of business processes, e.g., by process model discovery. However, logs potentially contain sensitive information about individual employees involved in process execution that is only partially hidden by an obfuscation of the event data. In this paper, we therefore address the risk of privacy-disclosure attacks on event logs with pseudonymized employee information. To this end, we introduce PRETSA, a novel algorithm for event log sanitization that provides privacy guarantees in terms of k-anonymity and t-closeness. It thereby avoids disclosure of employee identities, their membership in the event log, and their characterization based on sensitive attributes, such as performance information. Through step-wise transformations of a prefix-tree representation of an event log, we maintain its high utility for the discovery of a performance-annotated process model. Experiments with real-world data demonstrate that sanitization with PRETSA yields event logs of higher utility than methods based on frequency-based filtering, while providing the same privacy guarantees.
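A minimal illustration of the k-anonymity condition on activity prefixes follows: every prefix of every trace should be shared by at least k cases, otherwise the trace risks re-identifying an employee. The checker below only reports violating prefixes; PRETSA itself repairs them by transforming a prefix-tree representation of the log. The toy log and k value are illustrative.

```python
# Minimal illustration of k-anonymity over activity prefixes: every prefix of
# every trace should occur in at least k cases. This only reports violations;
# PRETSA repairs them via prefix-tree transformations. Toy log, illustrative k.
from collections import Counter

def prefix_violations(traces, k):
    counts = Counter()
    for trace in traces:
        for i in range(1, len(trace) + 1):
            counts[tuple(trace[:i])] += 1
    return {prefix: c for prefix, c in counts.items() if c < k}

event_log = [
    ["register", "triage", "treat", "discharge"],
    ["register", "triage", "treat", "discharge"],
    ["register", "triage", "xray", "treat", "discharge"],   # rare variant
]

for prefix, count in prefix_violations(event_log, k=2).items():
    print(f"prefix {prefix} occurs only {count} time(s) -> re-identification risk")
```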
Humans are widely identified as the weakest link in cybersecurity. Tertiary institution students face many cybersecurity issues due to their constant Internet exposure; however, the literature on tertiary institution students' cybersecurity behaviors is scarce. This research aims to link the factors responsible for tertiary institution students' cybersecurity behavior using validated cybersecurity factors: Perceived Vulnerability (PV); Perceived Barriers (PBr); Perceived Severity (PS); Security Self-Efficacy (SSE); Response Efficacy (RE); Cues to Action (CA); Peer Behavior (PBhv); Computer Skills (CS); Internet Skills (IS); Prior Experience with Computer Security Practices (PE); Perceived Benefits (PBnf); and Familiarity with Cyber-Threats (FCT), thereby exploring the relationship between these factors and the students' Cybersecurity Behaviors (CSB). A cross-sectional online survey was used to gather data from 450 undergraduate and postgraduate students from tertiary institutions within Klang Valley, Malaysia. Correlation analysis in SPSS version 25 was used to identify the relationships among the cybersecurity behavioral factors. Results indicate that all factors except Perceived Severity were significantly related to the students' cybersecurity behaviors. Practically, the study highlights the need for more cybersecurity training and practice in tertiary institutions.
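The kind of correlation analysis reported above can be sketched as Pearson correlations between each factor score and the behavior score. In the sketch below, synthetic Likert-style scores stand in for the survey data, and pandas/scipy stand in for the SPSS workflow used in the study; only a few of the twelve factors are shown.

```python
# Minimal sketch of the reported analysis: Pearson correlation between each
# factor score and the cybersecurity-behavior score. Synthetic scores stand in
# for the survey data; pandas/scipy stand in for the SPSS workflow.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 450  # sample size matching the study
df = pd.DataFrame({
    "PV":   rng.integers(1, 6, n),   # Likert-style factor scores (synthetic)
    "SSE":  rng.integers(1, 6, n),
    "PBhv": rng.integers(1, 6, n),
})
# Synthetic behavior score loosely driven by two factors plus noise.
df["CSB"] = 0.4 * df["SSE"] + 0.3 * df["PBhv"] + rng.normal(0, 1, n)

for factor in ["PV", "SSE", "PBhv"]:
    r, p = pearsonr(df[factor], df["CSB"])
    print(f"{factor} vs CSB: r = {r:.2f}, p = {p:.3f}")
```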
Enhanced situational awareness is integral to risk management and response evaluation. Dynamic systems that incorporate both hard and soft data sources allow for comprehensive situational frameworks which can supplement physical models with conceptual notions of risk. The processing of widely available semi-structured textual data sources can produce soft information that is readily consumable by such a framework. In this paper, we augment the situational awareness capabilities of a recently proposed risk management framework (RMF) by incorporating soft data. We illustrate the beneficial role of hard-soft data fusion in the characterization and evaluation of potential vessels in distress within Maritime Domain Awareness (MDA) scenarios. Risk features pertaining to maritime vessels are defined a priori and then quantified in real time using both hard (e.g., Automatic Identification System, Douglas Sea Scale) and soft (e.g., historical records of worldwide maritime incidents) data sources. A risk-aware metric to quantify the effectiveness of the hard-soft fusion process is also proposed. Though illustrated with MDA scenarios, the proposed hard-soft fusion methodology within the RMF can be readily applied to other domains.
Security in virtualised environments is becoming increasingly important for institutions, not only for a firm's own on-site servers and network but also for data and sites that are hosted in the cloud. Today, security is either handled globally by the cloud provider, or each customer needs to invest in its own security infrastructure. This paper proposes a Virtual Security Operation Center (VSOC) that allows security-related data from multiple sources to be collected, analysed, and visualized. For instance, a user can forward log data from their firewalls, applications, and routers in order to check for anomalies and other suspicious activities. The security analytics provided by the VSOC are comparable to those of commercial security incident and event management (SIEM) solutions, but are deployed as a cloud-based solution with the additional benefit of using big data processing tools to handle large volumes of data. This allows us to detect more complex attacks that cannot be detected with today's signature-based (i.e., rule-based) SIEM solutions.
Recent advances in the automated analysis of cryptographic protocols have aroused new interest in the practical application of unification modulo theories, especially theories that describe the algebraic properties of cryptosystems. However, this application requires unification algorithms that can be easily implemented and easily extended to combinations of different theories of interest. In practice this has meant that most tools use a version of a technique known as variant unification. This requires, among other things, that the theory be decomposable into a set of axioms B and a set of rewrite rules R such that R has the finite variant property with respect to B. Most theories that arise in cryptographic protocols have decompositions suitable for variant unification, but there is one major exception: the theory that describes encryption that is homomorphic over an Abelian group.
In this paper we address this problem by studying various approximations of homomorphic encryption over an Abelian group. We construct a hierarchy of increasingly richer theories, taking advantage of new results that allow us to automatically verify that their decompositions have the finite variant property. This new verification procedure also allows us to construct a rough metric of the complexity of a theory with respect to variant unification, or variant complexity. We specify different versions of protocols using the different theories, and analyze them in the Maude-NPA cryptographic protocol analysis tool to assess their behavior. This gives us greater understanding of how the theories behave in actual application, and suggests possible techniques for improving performance.
This paper establishes a bi-level programming model for reactive power optimization that accounts for the characteristics of grid voltage-reactive power control. The objectives of the upper and lower levels are the minimization of grid loss and of voltage deviation, respectively. Because the two levels differ in their variables and solution spaces, a primal-dual interior point algorithm is used at the upper level, which handles continuous variables such as active and reactive power sources; the upper-level model guarantees that sufficient reactive power is available in the power system. At the lower level, discrete variables such as transformer taps are optimized by a random forests algorithm (RFA), which regulates the voltage within a feasible range. Finally, a case study illustrates the speed and robustness of the method.
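The bi-level structure itself can be sketched with toy stand-in models: the upper level minimizes a surrogate loss over a continuous reactive power injection, and for each candidate the lower level picks the discrete tap minimizing a surrogate voltage deviation. The paper uses a primal-dual interior point method and a random forests algorithm for these steps; here scipy and plain enumeration are placeholders, and all models and numbers are illustrative.

```python
# Toy sketch of the bi-level structure only. The upper level minimizes a
# stand-in grid-loss function over a continuous reactive injection q; the
# lower level picks the discrete tap minimizing a stand-in voltage deviation.
# Interior point / random forests from the paper are replaced by scipy and
# enumeration; all models and numbers are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

TAPS = np.array([0.95, 0.975, 1.0, 1.025, 1.05])   # discrete tap ratios

def grid_loss(q):
    # Stand-in convex loss as a function of reactive injection q (p.u.).
    return (q - 0.3) ** 2 + 0.05

def voltage_deviation(q, tap):
    # Stand-in bus voltage model: deviation of |V| from 1.0 p.u.
    v = 0.97 + 0.05 * q + (tap - 1.0)
    return abs(v - 1.0)

def lower_level(q):
    # Lower level: choose the tap minimizing voltage deviation for a given q.
    devs = [voltage_deviation(q, t) for t in TAPS]
    best = int(np.argmin(devs))
    return TAPS[best], devs[best]

def upper_objective(q):
    # Upper level: grid loss, lightly penalized by the residual deviation
    # achievable at the lower level, which keeps the two levels coupled.
    _, dev = lower_level(q)
    return grid_loss(q) + 10.0 * dev

result = minimize_scalar(upper_objective, bounds=(0.0, 1.0), method="bounded")
q_opt = result.x
tap_opt, dev_opt = lower_level(q_opt)
print(f"q = {q_opt:.3f} p.u., tap = {tap_opt}, "
      f"loss = {grid_loss(q_opt):.4f}, |dV| = {dev_opt:.4f}")
```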
The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. If an app controls personal data from users, it needs to be compliant with GDPR. However, GDPR lists general rules rather than exact step-by-step guidelines about how to develop an app that fulfills the requirements. Therefore, there may exist GDPR compliance violations in existing apps, which would pose severe privacy threats to app users. In this paper, we take mobile health applications (mHealth apps) as a peephole to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and the app implementations by identifying the data practices declared in the app privacy policy and the data relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations, including the incompleteness of privacy policy, the inconsistency of data collections, and the insecurity of data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through different measures, but 46 (77.9%) of them contain at least one inconsistent collection behavior. Even worse, among the 59 apps, only 8 apps try to ensure the transmission security of collected data. However, all of them contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection for app users and developers.
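One of the three checks, inconsistency of data collection, can be illustrated with a toy sketch: compare the data types declared in the policy text against the data types the app is observed to collect, and flag undeclared collection. Simple keyword matching stands in for HPDROID's policy and code analysis; the keyword map, policy snippet, and behavior list are illustrative.

```python
# Toy sketch of the "inconsistent data collection" check: flag data types the
# app collects but the privacy policy never declares. Keyword matching stands
# in for HPDROID's policy and code analysis; all inputs are illustrative.
DATA_TYPE_KEYWORDS = {
    "location":    ["location", "gps", "geolocation"],
    "contacts":    ["contact", "address book"],
    "health_data": ["health", "heart rate", "medical"],
}

policy_text = """
We collect your location and health data such as heart rate to provide
personalized insights. We never sell your data.
""".lower()

# Data types the app code actually collects (e.g. from API-call analysis).
observed_collection = ["location", "contacts", "health_data"]

declared = {dtype for dtype, words in DATA_TYPE_KEYWORDS.items()
            if any(w in policy_text for w in words)}

for dtype in observed_collection:
    if dtype not in declared:
        print(f"potential GDPR violation: '{dtype}' collected but not declared in policy")
```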