Biblio

Found 227 results

Filters: First Letter Of Last Name is F
F
F, A. K., Mhaibes, H. Imad.  2018.  A New Initial Authentication Scheme for Kerberos 5 Based on Biometric Data and Virtual Password. 2018 International Conference on Advanced Science and Engineering (ICOASE). :280–285.

Kerberos is a widely used third-party authentication protocol that enables computers to connect securely over an insecure channel using single sign-on. It proves the identity of clients and encrypts all communications between them to ensure data privacy and integrity. Typically, Kerberos consists of three communication phases that establish a secure session between any two clients. Authentication is based on a password-derived secret long-term key shared between the client and the Kerberos server, so Kerberos suffers from password-guessing attacks, its main drawback. In this paper, we overcome this limitation by modifying the first (initial) phase to use a virtual password and biometric data. In addition, the proposed protocol provides strong authentication against multiple types of attacks.
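
As a rough sketch of the idea, the client's long-term key can be derived from the combination of a memorized password and a biometric template rather than from the password alone, so that guessing the password by itself no longer yields the key. The construction below is illustrative only; SHA-256/PBKDF2 and the combination step are assumptions, not the paper's exact scheme:

```python
import hashlib
import hmac
import os

def derive_initial_key(password: str, biometric_template: bytes, salt: bytes) -> bytes:
    """Hypothetical derivation of a Kerberos long-term key from a
    'virtual password' (password + biometric data) instead of the
    password alone, so a guessed password by itself is useless."""
    # Virtual password: bind the memorized secret to the biometric template.
    virtual_password = hashlib.sha256(password.encode() + biometric_template).digest()
    # Stretch into the client/KDC shared key (PBKDF2 is an assumed choice).
    return hashlib.pbkdf2_hmac("sha256", virtual_password, salt, 100_000)

# Example: an attacker guessing only the password cannot reproduce the key.
salt = os.urandom(16)
template = os.urandom(32)            # stand-in for enrolled biometric features
k_user = derive_initial_key("hunter2", template, salt)
k_guess = derive_initial_key("hunter2", os.urandom(32), salt)
print(hmac.compare_digest(k_user, k_guess))  # False: guessing the password is not enough
```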

F. Hassan, J. L. Magalini, V. de Campos Pentea, R. A. Santos.  2015.  "A project-based multi-disciplinary elective on digital data processing techniques". 2015 IEEE Frontiers in Education Conference (FIE). :1-7.

Today's era of the internet-of-things, cloud computing, and big data centers calls for more fresh graduates with expertise in digital data processing techniques such as compression, encryption, and error-correcting codes. This paper describes a project-based elective that covers these three main digital data processing techniques and can be offered to three different undergraduate majors: electrical engineering, computer engineering, and computer science. The course has been offered successfully for three years. Registration statistics show equal interest from the three majors. Assessment data show that students have successfully completed the different course outcomes. Student feedback shows that students appreciate the knowledge they attain from this elective and consider the workload, relative to other courses of equal credit, to be as expected.

F. Quader, V. Janeja, J. Stauffer.  2015.  "Persistent threat pattern discovery". 2015 IEEE International Conference on Intelligence and Security Informatics (ISI). :179-181.

An Advanced Persistent Threat (APT) is a complex (Advanced) cyber-attack (Threat) conducted against specific targets over long periods of time (Persistent), carried out by nation states or terrorist groups with highly sophisticated expertise to establish entries into organizations critical to a country's socio-economic status. The key identifier of such persistent threats is that their patterns are long term, can be high priority, and occur consistently over a period of time. This paper focuses on identifying persistent threat patterns in network data, particularly data collected from Intrusion Detection Systems. We utilize Association Rule Mining (ARM) to detect persistent threat patterns in network data. We identify potential persistent threat patterns that are frequent yet, at the same time, unusual compared with the other frequent patterns.
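
To illustrate the flavor of the approach, the sketch below counts how many observation windows each IDS alert pattern (an itemset of attribute-value pairs, as in association rule mining) recurs in, and keeps the persistent ones. It is a toy stand-in for the paper's ARM-based mining; field names and thresholds are invented:

```python
from collections import Counter

# Toy IDS alerts, one list per observation window (e.g., per week);
# each alert is an itemset of attribute=value pairs.
windows = [
    [frozenset({"src=10.0.0.5", "dst=db01", "alert=scan"}),
     frozenset({"src=10.0.0.9", "dst=web", "alert=xss"})],
    [frozenset({"src=10.0.0.5", "dst=db01", "alert=scan"}),
     frozenset({"src=10.0.0.7", "dst=web", "alert=sqli"})],
    [frozenset({"src=10.0.0.5", "dst=db01", "alert=scan"})],
]

def persistent_patterns(windows, min_windows=2):
    """A pattern is 'persistent' here if it recurs across many windows,
    i.e. it is frequent over the long term rather than in a single burst."""
    presence = Counter()
    for w in windows:
        for itemset in set(w):          # count each pattern once per window
            presence[itemset] += 1
    return {p: n for p, n in presence.items() if n >= min_windows}

for pattern, n in persistent_patterns(windows).items():
    print(sorted(pattern), f"seen in {n} windows")
```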

Fabian, Benjamin, Ermakova, Tatiana, Lentz, Tino.  2017.  Large-Scale Readability Analysis of Privacy Policies. Proceedings of the International Conference on Web Intelligence. :18–25.

Online privacy policies notify users of a Website how their personal information is collected, processed and stored. Against the background of rising privacy concerns, privacy policies seem to represent an influential instrument for increasing customer trust and loyalty. However, in practice, consumers seem to actually read privacy policies only in rare cases, possibly reflecting the common assumption that policies are hard to comprehend. By designing and implementing an automated extraction and readability analysis toolset that embodies a diversity of established readability measures, we present the first large-scale study providing current empirical evidence on the readability of nearly 50,000 privacy policies of popular English-language Websites. The results empirically confirm that, on average, current privacy policies are still hard to read. Furthermore, this study presents new theoretical insights for readability research, in particular on the extent to which practical readability measures are correlated. Specifically, it shows the redundancy of several well-established readability metrics such as SMOG, RIX, LIX, GFI, FKG, ARI, and FRES, thus easing future choices and comparisons between readability studies, as well as calling for research towards a readability measures framework. Moreover, a more sophisticated privacy policy extractor and analyzer, as well as a solid policy text corpus for further research, are provided.
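
Two of the measures named above, the Automated Readability Index (ARI) and the Flesch Reading Ease Score (FRES), are computed from simple text statistics; a minimal sketch with a crude syllable heuristic is shown below (this is not the paper's toolset):

```python
import re

def text_stats(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    chars = sum(len(w) for w in words)
    # Crude syllable heuristic: count vowel groups, minimum one per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return sentences, len(words), chars, syllables

def ari(text):
    s, w, c, _ = text_stats(text)
    return 4.71 * (c / w) + 0.5 * (w / s) - 21.43

def fres(text):
    s, w, _, syl = text_stats(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

policy = ("We may share your personal information with affiliated entities "
          "to the extent permitted by applicable law and our agreements.")
print(f"ARI {ari(policy):.1f}, FRES {fres(policy):.1f}")  # higher ARI / lower FRES = harder
```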

Fabre, Arthur, Martinez, Kirk, Bragg, Graeme M., Basford, Philip J., Hart, Jane, Bader, Sebastian, Bragg, Olivia M..  2016.  Deploying a 6LoWPAN, CoAP, Low Power, Wireless Sensor Network: Poster Abstract. Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. :362–363.

In order to integrate equipment from different vendors, wireless sensor networks need to become more standardized. Using IP as the basis of low power radio networks, together with application layer standards designed for this purpose is one way forward. This research focuses on implementing and deploying a system using Contiki, 6LoWPAN over an 868 MHz radio network, together with CoAP as a standard application layer protocol. A system was deployed in the Cairngorm mountains in Scotland as an environmental sensor network, measuring streams, temperature profiles in peat and periglacial features. It was found that RPL provided an effective routing algorithm, and that the use of UDP packets with CoAP proved to be an energy efficient application layer. This combination of technologies can be very effective in large area sensor networks.
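
For illustration, a single CoAP read from such a sensor node might look like the sketch below, using the aiocoap Python library; the node address and resource path are invented:

```python
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor():
    # One UDP datagram per request/response keeps radio-on time short,
    # which is what makes CoAP over 6LoWPAN energy efficient.
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://[2001:db8::1]/sensors/peat-temp")
    response = await ctx.request(request).response
    print(f"{response.code}: {response.payload.decode()}")

asyncio.run(read_sensor())
```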

Fachkha, C., Bou-Harb, E., Debbabi, M..  2014.  Fingerprinting Internet DNS Amplification DDoS Activities. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

This work proposes a novel approach to infer and characterize Internet-scale DNS amplification DDoS attacks by leveraging the darknet space. Complementary to the pioneering work on inferring Distributed Denial of Service (DDoS) activity using darknets, this work shows that we can extract DDoS activities without relying on backscatter analysis. The aim of this work is to extract cyber security intelligence related to DNS amplification DDoS activities, such as detection period, attack duration, intensity, packet size, rate, and geo-location, in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks. We empirically evaluate the proposed approach using 720 GB of real darknet data collected from a /13 address space during a recent three-month period. Our analysis reveals that the approach was successful in inferring significant DNS amplification DDoS activities, including the recent prominent attack that targeted one of the largest anti-spam organizations. Moreover, the analysis disclosed the mechanism of such DNS amplification DDoS attacks. Further, the results uncover high-speed and stealthy attempts that were never previously documented. The case study of the largest DDoS attack in history led to a better understanding of the nature and scale of this threat and can generate inferences that could contribute to detecting, preventing, assessing, mitigating, and even attributing DNS amplification DDoS activities.
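
The flavor of the detection can be sketched as follows: on darknet (unused) address space, any DNS response is unsolicited, so grouping large responses by resolver and flagging high-rate senders approximates the fingerprinting described. Thresholds and record layout below are invented, not the paper's parameters:

```python
from collections import defaultdict

# Toy darknet records: (timestamp_s, src_ip, dst_ip, udp_sport, size_bytes).
# DNS responses (source port 53) arriving at unused space are unsolicited.
packets = [
    (0.0, "198.51.100.7", "203.0.113.4", 53, 3200),
    (0.5, "198.51.100.7", "203.0.113.9", 53, 3100),
    (1.0, "198.51.100.7", "203.0.113.2", 53, 3300),
    (9.0, "192.0.2.10",  "203.0.113.5", 53, 120),
]

def fingerprint(packets, min_size=512, min_rate=1.0):
    """Group unsolicited DNS responses by resolver and flag high-rate,
    large-payload senders as likely amplification reflectors."""
    by_src = defaultdict(list)
    for t, src, dst, sport, size in packets:
        if sport == 53 and size >= min_size:
            by_src[src].append(t)
    for src, times in by_src.items():
        duration = (max(times) - min(times)) or 1.0
        rate = len(times) / duration
        if rate >= min_rate:
            print(f"{src}: {len(times)} responses, {rate:.1f} pkt/s -> suspected amplification")

fingerprint(packets)
```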

Facon, A., Guilley, S., Lec'Hvien, M., Schaub, A., Souissi, Y..  2018.  Detecting Cache-Timing Vulnerabilities in Post-Quantum Cryptography Algorithms. 2018 IEEE 3rd International Verification and Security Workshop (IVSW). :7-12.

When implemented on real systems, cryptographic algorithms are vulnerable to attacks observing their execution behavior, such as cache-timing attacks. Designing protected implementations must be done with knowledge and validation tools as early as possible in the development cycle. In this article we propose a methodology to assess the robustness of the candidates for the NIST post-quantum standardization project against cache-timing attacks. To this end we have developed a dedicated vulnerability research tool. It performs a static analysis with taint propagation of sensitive variables across the source code and detects leakage patterns. We use it to assess the security of the NIST post-quantum cryptography project submissions. Our results show that more than 80% of the analyzed implementations have at least one potential flaw, and three submissions total more than 1000 reported flaws each. Finally, this comprehensive study of the competitors' security allows us to identify the most frequent weaknesses amongst candidates and how they might be fixed.
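
The core detection idea, propagating a "secret" taint through data dependencies and flagging secret-dependent branches or memory accesses, can be sketched over a toy three-address program. This is a drastic simplification of the described tool, which analyzes real C source:

```python
# Toy IR: ("assign", dst, srcs), ("branch", cond_var), ("load", dst, index_var).
program = [
    ("assign", "t0", ["key"]),        # t0 <- f(key)
    ("assign", "t1", ["t0", "msg"]),
    ("load",   "t2", "t1"),           # table[t1]: secret-dependent address
    ("branch", "t2"),                 # secret-dependent branch
]

def find_leaks(program, secrets={"key"}):
    tainted = set(secrets)
    leaks = []
    for i, ins in enumerate(program):
        if ins[0] == "assign":
            _, dst, srcs = ins
            if tainted & set(srcs):
                tainted.add(dst)       # taint flows through data dependencies
        elif ins[0] == "load" and ins[2] in tainted:
            leaks.append((i, "secret-dependent memory access"))  # cache line leaks index
        elif ins[0] == "branch" and ins[1] in tainted:
            leaks.append((i, "secret-dependent branch"))         # timing leaks condition
    return leaks

print(find_leaks(program))
```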

Facon, Adrien, Guilley, Sylvain, Ngo, Xuan-Thuy, Perianin, Thomas.  2019.  Hardware-enabled AI for Embedded Security: A New Paradigm. 2019 3rd International Conference on Recent Advances in Signal Processing, Telecommunications & Computing (SigTelCom). :80–84.

As chips become more and more connected, they are more exposed, both to network and to physical attacks, so one must ensure they enjoy a sufficient level of protection. Security within chips is accordingly becoming a hot topic. Incident detection and reporting is one novel function expected from chips. In this talk, we explain why it is worthwhile to resort to Artificial Intelligence (AI) for security event handling. Drivers are the need to aggregate multiple and heterogeneous security sensors, and the need to digest this information quickly to produce exploitable information, all while maintaining a low false-positive detection rate. Key features are adequate learning procedures and fast, secure classification accelerated by hardware. A challenge is to embed such security-oriented AI logic without compromising the chip's power budget and silicon area. This talk accounts for the opportunities permitted by the symbiotic encounter between chip security and AI.

Fahad, S.K. Ahammad, Yahya, Abdulsamad Ebrahim.  2018.  Inflectional Review of Deep Learning on Natural Language Processing. 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE). :1–4.

In the age of knowledge, Natural Language Processing (NLP) is in demand across a huge range of applications. NLP previously dealt with statistical data; today it works extensively with corpora, lexicon databases, and pattern recognition. As Deep Learning (DL) methods apply artificial Neural Networks (NN) to nonlinear processing, NLP tools have become markedly more accurate and efficient. Multi-layer neural networks are gaining importance in NLP for their capability, speed, and consistent output. Hierarchical designs operate recurring processing layers over data to learn, and with this arrangement DL methods manage several NLP practices. In this paper, we strive to review the tools and the necessary methodology so as to present a clear understanding of the association between NLP and DL. Efficiency and execution in NLP are improved by part-of-speech tagging (POST), morphological analysis, named entity recognition (NER), semantic role labeling (SRL), syntactic parsing, and coreference resolution. Artificial Neural Networks (ANN), Time-Delay Neural Networks (TDNN), Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) are combined with dense vectors (DV), the window approach (WA), and multitask learning (MTL) as characteristics of deep learning. After purely statistical methods, as DL came to influence NLP, a fundamental collaboration between individual NLP processes and DL rules took shape.

Fahl, Sascha, Harbach, Marian, Perl, Henning, Koetter, Markus, Smith, Matthew.  2013.  Rethinking SSL Development in an Appified World. Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. :49–60.

The Secure Sockets Layer (SSL) is widely used to secure data transfers on the Internet. Previous studies have shown that the state of non-browser SSL code is catastrophic across a large variety of desktop applications and libraries as well as a large selection of Android apps, leaving users vulnerable to Man-in-the-Middle attacks (MITMAs). To determine possible causes of SSL problems on all major appified platforms, we extended the analysis to the walled-garden ecosystem of iOS, analyzed software developer forums and conducted interviews with developers of vulnerable apps. Our results show that the root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm. Based on our findings, we derive a proposal to rethink the handling of SSL in the appified world and present a set of countermeasures to improve the handling of SSL using Android as a blueprint for other platforms. Our countermeasures prevent developers from willfully or accidentally breaking SSL certificate validation, offer support for extended features such as SSL Pinning and different SSL validation infrastructures, and protect users. We evaluated our solution against 13,500 popular Android apps and conducted developer interviews to judge the acceptance of our approach and found that our solution works well for all investigated apps and developers.
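
One countermeasure discussed, SSL pinning, amounts to checking the peer's certificate against a known-good fingerprint in addition to normal chain validation; a minimal Python sketch of the check is below (the pinned digest is a placeholder):

```python
import hashlib
import socket
import ssl

PINNED_SHA256 = "0000...placeholder..."  # expected SHA-256 of the server certificate (DER)

def connect_with_pin(host, port=443):
    ctx = ssl.create_default_context()   # normal chain + hostname validation first
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            fingerprint = hashlib.sha256(der).hexdigest()
            if fingerprint != PINNED_SHA256:
                raise ssl.SSLError(f"certificate pin mismatch: {fingerprint}")
            return fingerprint  # connection may now be used safely

# connect_with_pin("example.com")  # raises unless the fingerprint matches the pin
```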

Fahrbach, M., Miller, G. L., Peng, R., Sawlani, S., Wang, J., Xu, S. C..  2018.  Graph Sketching against Adaptive Adversaries Applied to the Minimum Degree Algorithm. 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS). :101–112.

Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in polylogarithmic time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating interleaved queries and updates to this randomized data structure may be of independent interest.
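
For reference, the exact greedy heuristic that the paper's sketching data structure approximates can be stated in a few lines: repeatedly eliminate the vertex of minimum (fill) degree and connect its neighbors into a clique. A quadratic-time baseline sketch:

```python
def minimum_degree_ordering(adj):
    """Exact greedy minimum degree ordering on an undirected graph given as
    {vertex: set(neighbours)}. Eliminating v adds fill edges making N(v) a clique.
    This is the slow baseline that the paper's sketching approach accelerates."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # vertex of minimum (fill) degree
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}                  # fill edges: neighbours become a clique
        order.append(v)
    return order

grid = {1: {2, 4}, 2: {1, 3, 5}, 3: {2, 6}, 4: {1, 5}, 5: {2, 4, 6}, 6: {3, 5}}
print(minimum_degree_ordering(grid))
```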

Fahrenkrog-Petersen, Stephan A., van der Aa, Han, Weidlich, Matthias.  2019.  PRETSA: Event Log Sanitization for Privacy-aware Process Discovery. 2019 International Conference on Process Mining (ICPM). :1–8.

Event logs that originate from information systems enable comprehensive analysis of business processes, e.g., by process model discovery. However, logs potentially contain sensitive information about individual employees involved in process execution that is only partially hidden by an obfuscation of the event data. In this paper, we therefore address the risk of privacy-disclosure attacks on event logs with pseudonymized employee information. To this end, we introduce PRETSA, a novel algorithm for event log sanitization that provides privacy guarantees in terms of k-anonymity and t-closeness. It thereby avoids disclosure of employee identities, their membership in the event log, and their characterization based on sensitive attributes, such as performance information. Through step-wise transformations of a prefix-tree representation of an event log, we maintain its high utility for discovery of a performance-annotated process model. Experiments with real-world data demonstrate that sanitization with PRETSA yields event logs of higher utility compared to methods that exploit frequency-based filtering, while providing the same privacy guarantees.
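
The prefix-tree intuition behind the k-anonymity guarantee can be illustrated with a toy sketch: traces sharing a prefix share a path, and each trace is truncated at the last prefix supported by at least k cases. PRETSA's actual step-wise transformations are more careful (they preserve performance annotations and also enforce t-closeness):

```python
from collections import Counter

def k_anonymous_prefixes(traces, k=2):
    """Truncate each trace (activity sequence) at the last prefix shared by
    at least k cases -- a toy stand-in for PRETSA's prefix-tree transformations."""
    support = Counter()
    for trace in traces:
        for i in range(1, len(trace) + 1):
            support[trace[:i]] += 1
    sanitized = []
    for trace in traces:
        keep = 0
        for i in range(1, len(trace) + 1):
            if support[trace[:i]] >= k:
                keep = i          # prefix support only decreases with length
        sanitized.append(trace[:keep])
    return sanitized

traces = [("register", "check", "approve"),
          ("register", "check", "approve"),
          ("register", "check", "reject")]   # unique suffix could identify an employee
print(k_anonymous_prefixes(traces, k=2))
# [('register','check','approve'), ('register','check','approve'), ('register','check')]
```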

Falcon, R., Abielmona, R., Billings, S., Plachkov, A., Abbass, H..  2014.  Risk management with hard-soft data fusion in maritime domain awareness. Computational Intelligence for Security and Defense Applications (CISDA), 2014 Seventh IEEE Symposium on. :1-8.

Enhanced situational awareness is integral to risk management and response evaluation. Dynamic systems that incorporate both hard and soft data sources allow for comprehensive situational frameworks which can supplement physical models with conceptual notions of risk. The processing of widely available semi-structured textual data sources can produce soft information that is readily consumable by such a framework. In this paper, we augment the situational awareness capabilities of a recently proposed risk management framework (RMF) with the incorporation of soft data. We illustrate the beneficial role of the hard-soft data fusion in the characterization and evaluation of potential vessels in distress within Maritime Domain Awareness (MDA) scenarios. Risk features pertaining to maritime vessels are defined a priori and then quantified in real time using both hard (e.g., Automatic Identification System, Douglas Sea Scale) as well as soft (e.g., historical records of worldwide maritime incidents) data sources. A risk-aware metric to quantify the effectiveness of the hard-soft fusion process is also proposed. Though illustrated with MDA scenarios, the proposed hard-soft fusion methodology within the RMF can be readily applied to other domains.
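
As a toy illustration of hard-soft fusion, per-feature vessel risk scores from hard sensors and soft text sources might be combined by a confidence-weighted average. Feature names, scores, and weights here are invented; the RMF's actual fusion and its risk-aware effectiveness metric are richer:

```python
# (risk score in [0,1], source confidence in [0,1]) per risk feature of a vessel.
hard = {"sea_state": (0.8, 0.9), "ais_gap": (0.6, 0.95)}               # sensor-derived
soft = {"incident_history": (0.7, 0.5), "region_reports": (0.4, 0.6)}  # text-derived

def fuse(*sources):
    """Confidence-weighted average over all hard and soft risk features."""
    num = den = 0.0
    for src in sources:
        for score, conf in src.values():
            num += score * conf
            den += conf
    return num / den

print(f"fused vessel risk: {fuse(hard, soft):.2f}")
```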

Falk, E., Repcek, S., Fiz, B., Hommes, S., State, R., Sasnauskas, R..  2017.  VSOC - A Virtual Security Operating Center. GLOBECOM 2017 - 2017 IEEE Global Communications Conference. :1–6.

Security in virtualised environments is becoming increasingly important for institutions, not only for a firm's own on-site servers and network but also for data and sites that are hosted in the cloud. Today, security is either handled globally by the cloud provider, or each customer needs to invest in its own security infrastructure. This paper proposes a Virtual Security Operation Center (VSOC) that allows security-related data from multiple sources to be collected, analysed and visualized. For instance, a user can forward log data from its firewalls, applications and routers in order to check for anomalies and other suspicious activities. The security analytics provided by the VSOC are comparable to those of commercial security incident and event management (SIEM) solutions, but are deployed as a cloud-based solution with the additional benefit of using big data processing tools to handle large volumes of data. This allows us to detect more complex attacks that cannot be detected with today's signature-based (i.e., rule-based) SIEM solutions.

Fan Yang, University of Illinois at Urbana-Champaign, Santiago Escobar, Universidad Politécnica de Valencia, Spain, Catherine Meadows, Naval Research Laboratory, Jose Meseguer, University of Illinois at Urbana-Champaign, Paliath Narendran, University at Albany-SUNY.  2014.  Theories for Homomorphic Encryption, Unification and the Finite Variant Property. 16th International Symposium on Principles and Practice of Declarative Programming (PPDP 2014).

Recent advances in the automated analysis of cryptographic protocols have aroused new interest in the practical application of unification modulo theories, especially theories that describe the algebraic properties of cryptosystems. However, this application requires unification algorithms that can be easily implemented and easily extended to combinations of different theories of interest. In practice this has meant that most tools use a version of a technique known as variant unification. This requires, among other things, that the theory be decomposable into a set of axioms B and a set of rewrite rules R such that R has the finite variant property with respect to B. Most theories that arise in cryptographic protocols have decompositions suitable for variant unification, but there is one major exception: the theory that describes encryption that is homomorphic over an Abelian group.

In this paper we address this problem by studying various approximations of homomorphic encryption over an Abelian group. We construct a hierarchy of increasingly richer theories, taking advantage of new results that allow us to automatically verify that their decompositions have the finite variant property. This new verification procedure also allows us to construct a rough metric of the complexity of a theory with respect to variant unification, or variant complexity. We specify different versions of protocols using the different theories, and analyze them in the Maude-NPA cryptographic protocol analysis tool to assess their behavior. This gives us greater understanding of how the theories behave in actual application, and suggests possible techniques for improving performance.

Fan, Chuchu, Mitra, Sayan.  2019.  Data-Driven Safety Verification of Complex Cyber-Physical Systems. Design Automation of Cyber-Physical Systems. :107–142.

Data-driven verification methods utilize execution data together with models for establishing safety requirements. These are often the only tools available for analyzing complex, nonlinear cyber-physical systems, for which purely model-based analysis is currently infeasible. In this chapter, we outline the key concepts and algorithmic approaches for data-driven verification and discuss the guarantees they provide. We introduce some of the software tools that embody these ideas and present several practical case studies demonstrating their application in safety analysis of autonomous vehicles, advanced driver assist systems (ADAS), satellite control, and engine control systems.

Fan, Chun-I, Chen, I-Te, Cheng, Chen-Kai, Huang, Jheng-Jia, Chen, Wen-Tsuen.  2018.  FTP-NDN: File Transfer Protocol Based on Re-Encryption for Named Data Network Supporting Nondesignated Receivers. IEEE Systems Journal. 12:473–484.

Given users' network flow requirements and usage volumes nowadays, TCP/IP networks face various problems. For one, users of video services may simultaneously access the same content, which leads the host to incur extra costs. Second, although nearby nodes may hold the file a user wants to access, the user cannot directly verify the file itself. This issue leads the user to connect to a remote host rather than the nearby nodes and causes network traffic to increase greatly. The named data network (NDN), which is based on the data itself, was therefore brought about to deal with these problems. In NDN, all users can access a file from nearby nodes, and they can directly verify the file themselves rather than trusting the specific host that holds it. However, NDN still has no complete, standard, and secure file transfer protocol that supports ciphertext transmission and handles the problem of unknown potential receivers. The straightforward solution is for a sender to encrypt a file with the receiver's public key before sending it to NDN nodes, but this limits user behavior and incurs significant storage costs at NDN nodes. This paper presents a complete secure file transfer protocol that combines data re-encryption, satisfies the requirement of secure ciphertext transmission, solves the problem of unknown potential receivers, and saves significant storage costs at NDN nodes. The proposed protocol is the first to achieve data confidentiality and solve the problem of unknown potential receivers in NDN. Finally, we also provide formal security models and proofs for the proposed FTP-NDN.
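
The re-encryption idea at the heart of FTP-NDN can be conveyed with a deliberately insecure one-time-pad toy: a node holding a re-encryption key can transform a ciphertext for the sender into one for a later-identified receiver without ever seeing the plaintext. This is intuition only, not the paper's construction:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m = b"sensor-data-chunk"                 # file block published once into NDN
k_sender = os.urandom(len(m))
k_receiver = os.urandom(len(m))

c = xor(m, k_sender)                     # stored at NDN nodes under the sender's key
rk = xor(k_sender, k_receiver)           # re-encryption key issued once the receiver is known

# An NDN node re-encrypts without ever recovering the plaintext:
c_for_receiver = xor(c, rk)              # equals m XOR k_receiver
assert xor(c_for_receiver, k_receiver) == m
```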

Fan, H., Ji, X. Y., Chen, S..  2015.  A hybrid algorithm for reactive power optimization based on bi-level programming. International Conference on Renewable Power Generation (RPG 2015). :1–4.

This paper establishes a bi-level programming model for reactive power optimization that accounts for the grid's voltage/reactive-power control characteristics. The objectives of the upper and lower levels are minimization of grid loss and of voltage deviation, respectively. Because the two levels differ in their variables and solution spaces, a primal-dual interior point algorithm is suggested for the upper level, which handles continuous variables such as active and reactive power sources. The upper-level model guarantees sufficient reactive power in the power system. In the lower level, discrete variables such as transformer taps are optimized by a random forests algorithm (RFA), which regulates the voltage within a feasible range. Finally, a case study illustrates the speed and robustness of this method.
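
The overall structure, a continuous upper level solved by a gradient-based method and a discrete lower level handled by a random forest, can be caricatured with scipy and scikit-learn. The toy loss and voltage functions below stand in for the power-flow model, and every numeric choice is invented:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
TAPS = np.array([0.95, 0.975, 1.0, 1.025, 1.05])   # discrete transformer tap ratios

# Stand-in physics: grid loss and voltage deviation as functions of the
# continuous reactive injection q and the tap ratio (purely illustrative).
loss = lambda q, tap: (q - 0.3) ** 2 + 0.1 * (tap - 1.0) ** 2
volt_dev = lambda q, tap: abs(1.0 + 0.2 * q - tap)

# Lower level: a random forest learns voltage deviation from (q, tap) samples,
# then picks the tap minimizing its prediction (the RFA's role in the paper).
X = rng.uniform([0.0, 0.95], [1.0, 1.05], size=(500, 2))
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(
    X, [volt_dev(q, t) for q, t in X])

def best_tap(q):
    preds = rf.predict(np.column_stack([np.full(len(TAPS), q), TAPS]))
    return TAPS[np.argmin(preds)]

# Upper level: continuous loss minimization, re-solving the lower level whenever
# q moves (scipy's bounded quasi-Newton stands in for primal-dual interior point).
res = minimize(lambda x: loss(x[0], best_tap(x[0])), x0=[0.5], bounds=[(0.0, 1.0)])
q_opt = res.x[0]
print(f"q*={q_opt:.3f}, tap*={best_tap(q_opt):.3f}")
```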

Fan, Jiasheng, Chen, Fangjiong, Guan, Quansheng, Ji, Fei, Yu, Hua.  2016.  On the Probability of Finding a Receiver in an Ellipsoid Neighborhood of a Sender in 3D Random UANs. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :51:1–51:2.

We consider a 3-dimensional (3D) underwater random network (UAN) where the nodes are uniformly distributed in a cuboid region. We then derive the closed-form probability of finding a receiver in an ellipsoid neighborhood of an arbitrary sender. Computer simulation shows that the analytical result is generally consistent with the simulated result.
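
The derived probability is easy to check numerically: for uniformly distributed nodes it equals the volume of the ellipsoid neighborhood clipped to the cuboid, divided by the cuboid's volume. A Monte Carlo sketch with invented geometry parameters:

```python
import random

def p_receiver_in_ellipsoid(sender, semi_axes, box, trials=200_000):
    """Probability that a node placed uniformly in the origin-cornered cuboid
    `box` = (X, Y, Z) falls inside the axis-aligned ellipsoid with semi-axes
    (a, b, c) centred at `sender`."""
    (a, b, c), (X, Y, Z) = semi_axes, box
    hits = 0
    for _ in range(trials):
        x, y, z = random.uniform(0, X), random.uniform(0, Y), random.uniform(0, Z)
        if ((x - sender[0]) / a) ** 2 + ((y - sender[1]) / b) ** 2 \
                + ((z - sender[2]) / c) ** 2 <= 1:
            hits += 1
    return hits / trials

# Sender at the centre of a 100x100x50 m region, 30x30x10 m ellipsoid neighbourhood.
print(p_receiver_in_ellipsoid((50, 50, 25), (30, 30, 10), (100, 100, 50)))
# For a fully interior ellipsoid this approaches (4/3)*pi*a*b*c / (X*Y*Z) ~ 0.075.
```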

Fan, Renshi, Du, Gaoming, Xu, Pengfei, Li, Zhenmin, Song, Yukun, Zhang, Duoli.  2019.  An Adaptive Routing Scheme Based on Q-learning and Real-time Traffic Monitoring for Network-on-Chip. 2019 IEEE 13th International Conference on Anti-counterfeiting, Security, and Identification (ASID). :244–248.

In Network-on-Chip (NoC) design, performance optimization has always been a research focus. Compared with static routing schemes, dynamic routing schemes can better reduce packet transmission latency under network congestion. In this paper, we propose a dynamic Q-learning routing approach with real-time monitoring of the NoC. First, we design a real-time monitoring scheme, and the corresponding circuits, to record the traffic congestion status of the NoC. Second, we propose a novel Q-learning method that finds an optimal path based on the lowest traffic congestion. Finally, we dynamically redistribute network tasks to increase packet transmission speed and balance the traffic load. Compared with C-XY routing and DyXY routing, our method achieves improvements of 25.6%-49.5% and 22.9%-43.8%, respectively.
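
The Q-learning core, a tabular update driven by congestion readings from the monitoring circuits, can be sketched on a toy 4x4 mesh as follows. The epsilon-greedy policy and reward shaping are generic assumptions, not the paper's exact formulation:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = defaultdict(float)                     # Q[(router, next_hop)]

def neighbors(r, cols=4, rows=4):
    x, y = r
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in cand if 0 <= i < cols and 0 <= j < rows]

def step(router, dest, congestion):
    """One routing hop: epsilon-greedy next-hop choice, with a reward that
    penalizes congestion reported by the real-time monitors."""
    nbrs = neighbors(router)
    nxt = random.choice(nbrs) if random.random() < EPS \
        else max(nbrs, key=lambda n: Q[(router, n)])
    reward = 10.0 if nxt == dest else -1.0 - congestion[nxt]
    future = 0.0 if nxt == dest else max(Q[(nxt, n)] for n in neighbors(nxt))
    Q[(router, nxt)] += ALPHA * (reward + GAMMA * future - Q[(router, nxt)])
    return nxt

congestion = defaultdict(float, {(1, 0): 5.0})   # hotspot seen by the monitoring circuit
r = (0, 0)
for _ in range(5000):                            # learn while routing packets to (3, 3)
    r = (0, 0) if r == (3, 3) else step(r, (3, 3), congestion)
print(max(neighbors((0, 0)), key=lambda n: Q[((0, 0), n)]))  # should avoid the hotspot
```
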
Fan, Shuqin, Wang, Wenbo, Cheng, Qingfeng.  2016.  Attacking OpenSSL Implementation of ECDSA with a Few Signatures. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1505–1515.

In this work, we give a lattice attack on the ECDSA implementation in the latest version of OpenSSL, which implements scalar multiplication by the windowed Non-Adjacent Form method. We propose a totally different but more efficient method of extracting and utilizing information from the side-channel results, remarkably improving on previous attacks. First, we develop a new efficient method that can extract almost all information from the side-channel results, obtaining 105.8 bits of information per signature on average for 256-bit ECDSA. Then, in order to make the utmost of our extracted information, we translate the problem of recovering the secret key to the Extended Hidden Number Problem, which can be solved by lattice reduction algorithms. Finally, we introduce the methods of elimination, merging, most-significant-digit recovering, and enumeration to improve the attack. Our attack is mounted on the secp256k1 curve, and the result shows that only 4 signatures would be enough to recover the secret key if the Flush+Reload attack is implemented perfectly without any error, which is much better than the best known result needing at least 13 signatures.
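
Why leaked nonce information is so devastating follows from the ECDSA signing equation s = k^(-1)(h + r*d) mod n: a single fully recovered nonce k reveals the private key d algebraically. The paper needs only partial nonce information from 4 signatures, recovered via lattice reduction, but the toy sketch below shows the underlying algebra (r is fabricated here rather than derived from kG; only the arithmetic matters):

```python
# ECDSA over a group of order n: s = k^(-1) * (h + r*d) mod n.
# If the nonce k leaks (e.g., via Flush+Reload on the wNAF scalar multiply),
# the private key follows directly: d = (s*k - h) * r^(-1) mod n.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order
d = 0x1E240                          # toy private key
h, r, k = 0xCAFE, 0xBEEF, 0x12345    # message hash, signature r, leaked nonce
s = pow(k, -1, n) * (h + r * d) % n  # what the signer computes

d_recovered = (s * k - h) * pow(r, -1, n) % n
assert d_recovered == d
print(hex(d_recovered))
```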

Fan, Wenjun, Ziembicka, Joanna, de Lemos, Rogério, Chadwick, David, Di Cerbo, Francesco, Sajjad, Ali, Wang, Xiao-Si, Herwono, Ian.  2019.  Enabling Privacy-Preserving Sharing of Cyber Threat Information in the Cloud. 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/ 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :74–80.

Network threats often come from multiple sources and affect a variety of domains. Collaborative sharing and analysis of Cyber Threat Information (CTI) can greatly improve the prediction and prevention of cyber-attacks. However, CTI data containing sensitive and confidential information can cause privacy exposure and disclose security risks, which will deter organisations from sharing their CTI data. To address these concerns, the consortium of the EU H2020 project entitled Collaborative and Confidential Information Sharing and Analysis for Cyber Protection (C3ISP) has designed and implemented a framework (i.e. C3ISP Framework) as a service for cyber threat management. This paper focuses on the design and development of an API Gateway, which provides a bridge between end-users and their data sources, and the C3ISP Framework. It facilitates end-users to retrieve their CTI data, regulate data sharing agreements in order to sanitise the data, share the data with privacy-preserving means, and invoke collaborative analysis for attack prediction and prevention. In this paper, we report on the implementation of the API Gateway and experiments performed. The results of these experiments show the efficiency of our gateway design, and the benefits for the end-users who use it to access the C3ISP Framework.
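
A sanitization step of the kind regulated by data sharing agreements, e.g. pseudonymizing internal IP addresses in CTI records before they leave the organisation, might look like the following sketch; keyed hashing as the anonymization primitive is an assumption, not the C3ISP Framework's actual mechanism:

```python
import hashlib
import hmac
import ipaddress

SECRET = b"org-local-pseudonymization-key"   # never shared with the analysis platform

def pseudonymize_ip(ip: str) -> str:
    """Replace an internal IP with a stable keyed pseudonym so that
    correlation across records still works but identities are not exposed."""
    tag = hmac.new(SECRET, ip.encode(), hashlib.sha256).hexdigest()[:12]
    return f"ip-{tag}"

def sanitize_cti(record: dict, internal=ipaddress.ip_network("10.0.0.0/8")) -> dict:
    out = dict(record)
    for field in ("src_ip", "dst_ip"):
        if field in out and ipaddress.ip_address(out[field]) in internal:
            out[field] = pseudonymize_ip(out[field])
    return out

print(sanitize_cti({"src_ip": "10.1.2.3", "dst_ip": "198.51.100.9", "alert": "beaconing"}))
```
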
Fan, Xiaokang, Sui, Yulei, Liao, Xiangke, Xue, Jingling.  2017.  Boosting the Precision of Virtual Call Integrity Protection with Partial Pointer Analysis for C++. Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis. :329–340.

We present, VIP, an approach to boosting the precision of Virtual call Integrity Protection for large-scale real-world C++ programs (e.g., Chrome) by using pointer analysis for the first time. VIP introduces two new techniques: (1) a sound and scalable partial pointer analysis for discovering statically the sets of legitimate targets at virtual callsites from separately compiled C++ modules and (2) a lightweight instrumentation technique for performing (virtual call) integrity checks at runtime. VIP raises the bar against vtable hijacking attacks by providing stronger security guarantees than the CHA-based approach with comparable performance overhead. VIP is implemented in LLVM-3.8.0 and evaluated using SPEC programs and Chrome. Statically, VIP protects virtual calls more effectively than CHA by significantly reducing the sets of legitimate targets permitted at 20.3% of the virtual callsites per program, on average. Dynamically, VIP incurs an average (maximum) instrumentation overhead of 0.7% (3.3%), making it practically deployable as part of a compiler tool chain.
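
The baseline that VIP sharpens, class hierarchy analysis (CHA), permits every override reachable in the static type's subtree at a virtual callsite. The toy sketch below computes such CHA target sets for a small hierarchy; these are the sets that VIP's partial pointer analysis then shrinks at about a fifth of callsites:

```python
# Toy class hierarchy: class -> (base, {virtual methods it defines/overrides}).
hierarchy = {
    "Shape":  (None,    {"draw"}),
    "Circle": ("Shape", {"draw"}),
    "Square": ("Shape", {"draw"}),
    "Label":  ("Shape", set()),       # inherits Shape::draw
}

def subclasses(cls):
    out = {cls}
    changed = True
    while changed:
        changed = False
        for c, (base, _) in hierarchy.items():
            if base in out and c not in out:
                out.add(c)
                changed = True
    return out

def cha_targets(static_type, method):
    """CHA: any override of `method` in any subclass of the receiver's static
    type is a legitimate target (VIP prunes this set with pointer analysis)."""
    targets = set()
    for c in subclasses(static_type):
        t = c
        while method not in hierarchy[t][1]:   # walk up to the defining class
            t = hierarchy[t][0]
        targets.add(f"{t}::{method}")
    return targets

print(cha_targets("Shape", "draw"))   # {'Shape::draw', 'Circle::draw', 'Square::draw'}
```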

Fan, Xinxin, Chai, Qi.  2018.  Roll-DPoS: A Randomized Delegated Proof of Stake Scheme for Scalable Blockchain-Based Internet of Things Systems. Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. :482–484.

Delegated Proof-of-Stake (DPoS) is an efficient, decentralized, and flexible consensus framework available in the blockchain industry. However, applying DPoS to decentralized Internet of Things (IoT) applications is quite challenging due to the nature of IoT systems, such as large-scale deployments and huge amounts of data. To address the unique challenges of IoT-based blockchain applications, we present Roll-DPoS, a randomized delegated proof-of-stake algorithm. Roll-DPoS inherits all the advantages of the original DPoS consensus framework and further enhances its capability in terms of decentralization as well as extensibility to complex blockchain architectures. A number of modern cryptographic techniques have been utilized to optimize the consensus process with respect to the computational and communication overhead.
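
The "roll" in Roll-DPoS, periodically re-drawing the active block producers from a larger community-elected candidate pool using shared randomness, can be sketched as follows; hash-seeded sampling is an illustrative stand-in for the project's actual cryptographic randomization:

```python
import hashlib
import random

CANDIDATES = [f"delegate-{i}" for i in range(24)]   # community-elected candidate pool
COMMITTEE_SIZE = 7

def roll_committee(epoch: int, beacon: bytes) -> list:
    """Deterministically re-draw the active committee each epoch from a shared
    randomness beacon, so every node computes the same rotation."""
    seed = hashlib.sha256(beacon + epoch.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    return rng.sample(CANDIDATES, COMMITTEE_SIZE)

beacon = b"previous-block-hash"                     # illustrative randomness source
for epoch in range(3):
    committee = roll_committee(epoch, beacon)
    proposer = committee[epoch % COMMITTEE_SIZE]    # round-robin within the epoch
    print(f"epoch {epoch}: committee {committee}, proposer {proposer}")
```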