Biblio

Found 179 results

Filters: First Letter Of Title is L
L
Jia, J., Chen, L..  2017.  (l, m, d)-Anonymity: A Resisting Similarity Attack Model for Multiple Sensitive Attributes. 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :756–760.

Preserving privacy is extremely important in data publishing. Existing privacy-preserving models are mostly oriented to a single sensitive attribute and cannot be applied to situations with multiple sensitive attributes. Moreover, they do not consider the semantic similarity between sensitive attribute values and may be vulnerable to similarity attack. In this paper, we propose an (l, m, d)-anonymity model against similarity attacks on multiple sensitive attributes, where m is the dimension of the sensitive attributes. This model uses a semantic hierarchical tree to analyze and compute the semantic dissimilarity between sensitive attribute values, and each equivalence class must contain at least l sensitive attribute values that satisfy d-difference on each sensitive attribute dimension. Meanwhile, in order to make the published data highly available, our model adopts a distance-based measurement method to divide the equivalence classes. We carry out extensive experiments to verify that the (l, m, d)-anonymity model can significantly reduce the probability of sensitive information leakage and protect individual privacy more effectively.
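
As a rough illustration of the per-dimension condition described above, the following Python sketch (a toy of our own, not the authors' implementation) checks whether an equivalence class contains at least l sensitive values that are pairwise d-different under a tree-based semantic dissimilarity; the hierarchy, the dissimilarity measure, and the example records are assumptions made for illustration.

```python
"""Toy check of the per-dimension (l, d) condition; hierarchy and data are illustrative."""
from itertools import combinations

# A tiny semantic hierarchy for one sensitive attribute (child -> parent).
PARENT = {
    "flu": "respiratory", "pneumonia": "respiratory",
    "gastritis": "digestive", "ulcer": "digestive",
    "respiratory": "disease", "digestive": "disease", "disease": None,
}

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

def dissimilarity(u, v):
    """Normalized tree distance in [0, 1]: 0 = identical, 1 = maximally far apart."""
    pu, pv = path_to_root(u), path_to_root(v)
    common = len(set(pu) & set(pv))
    dist = (len(pu) - common) + (len(pv) - common)
    max_dist = 2 * (max(len(path_to_root(x)) for x in PARENT) - 1)
    return dist / max_dist

def satisfies_l_d(values, l, d):
    """True if some l distinct values are pairwise at least d-dissimilar."""
    distinct = list(set(values))
    for subset in combinations(distinct, l):
        if all(dissimilarity(a, b) >= d for a, b in combinations(subset, 2)):
            return True
    return False

# One equivalence class on one sensitive dimension.
equivalence_class = ["flu", "pneumonia", "gastritis", "ulcer"]
print(satisfies_l_d(equivalence_class, l=2, d=0.5))  # True: cross-branch values differ enough
```

In a full anonymization pipeline this kind of check would be applied to every equivalence class and every sensitive dimension before the data is published.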

Haller, Philipp, Loiko, Alex.  2016.  LaCasa: Lightweight Affinity and Object Capabilities in Scala. Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications. :272–291.

Aliasing is a known source of challenges in the context of imperative object-oriented languages, which have led to important advances in type systems for aliasing control. However, their large-scale adoption has turned out to be a surprisingly difficult challenge. While new language designs show promise, they do not address the need of aliasing control in existing languages. This paper presents a new approach to isolation and uniqueness in an existing, widely-used language, Scala. The approach is unique in the way it addresses some of the most important obstacles to the adoption of type system extensions for aliasing control. First, adaptation of existing code requires only a minimal set of annotations. Only a single bit of information is required per class. Surprisingly, the paper shows that this information can be provided by the object-capability discipline, widely-used in program security. We formalize our approach as a type system and prove key soundness theorems. The type system is implemented for the full Scala language, providing, for the first time, a sound integration with Scala's local type inference. Finally, we empirically evaluate the conformity of existing Scala open-source code on a corpus of over 75,000 LOC.

Honig, William L., Noda, Natsuko, Takada, Shingo.  2016.  Lack of Attention to Singular (or Atomic) Requirements Despite Benefits for Quality, Metrics and Management. SIGSOFT Softw. Eng. Notes. 41:1–5.

There are seemingly many advantages to being able to identify, document, test, and trace single or "atomic" requirements. Why then has there been little attention to the topic and no widely used definition or process on how to define atomic requirements? Definitions of requirements and standards focus on user needs, system capabilities or functions; some definitions include making individual requirements singular or without the use of conjunctions. In a few cases there has been a description of atomic system events or requirements. This work is surveyed here although there is no well accepted and used best practice for generating atomic requirements. Due to their importance in software engineering, quality and metrics for requirements have received considerable attention. In the seminal paper on software requirements quality, Davis et al. proposed specific metrics including the "unambiguous quality factor" and the "verifiable quality factor"; these and other metrics work best with a clearly enumerable list of single requirements. Atomic requirements are defined here as a natural language statement that completely describes a single system function, feature, need, or capability, including all information, details, limits, and characteristics. A typical user login screen is used as an example of an atomic requirement which can include both functional and nonfunctional requirements. Individual atomic requirements are supported by a system glossary, references to applicable industry standards, mock ups of the user interface, etc. One way to identify such atomic requirements is from use case or system event analysis. This definition of atomic requirements is still a work in progress and offered to prompt discussion. Atomic requirements allow clear naming or numbering of requirements for traceability, change management, and importance ranking. Further, atomic requirements defined in this manner are suitable for rapid implementation approaches (implementing one requirement at a time), enable good test planning (testing can clearly indicate pass or fail of the whole requirement), and offer other management advantages in project control.

Bateman, Scott, Gutwin, Carl.  2016.  (The Lack of) Privacy Concerns with Sharing Web Activity at Work and the Implications for Collaborative Search. Proceedings of the 2016 ACM on Conference on Human Information Interaction and Retrieval. :43–52.
Collaborative information seeking frequently occurs in an opportunistic and loosely-coupled fashion that is supported by awareness of others' activities on the web. Automatically sharing traces of information about web activity could substantially improve these collaborative information tasks, but conventional wisdom suggests that people are very reluctant to share information about web usage. Because work settings have different rules and practices about privacy, we carried out the first systematic study of people's privacy concerns about sharing web activity within workgroups. To provide a better understanding of privacy concerns about sharing web activity at work, we conducted a two-week diary study with 18 participants. Our study system asked participants to report on their search tasks and privacy concerns. Surprisingly, our results showed that people have little concern about sharing the majority of their activities with their work colleagues, and had even fewer concerns with sharing work-related activities. Our results provide new insights into the possibilities of sharing web activities within workgroups, and provide evidence that tools based on automatic sharing of awareness information can be feasible.
Li, Gaochao, Xu, Xiaolin, Li, Qingshan.  2015.  LADP: A lightweight authentication and delegation protocol for RFID tags. 2015 Seventh International Conference on Ubiquitous and Future Networks. :860–865.

In recent years, RFID security and privacy have been a growing concern. To prevent tags from being cloned, physically unclonable functions (PUFs) have been proposed. In each PUF-enabled tag, the responses of the PUF depend on structural disorder that cannot be cloned or reproduced. Consequently, many authentication protocols require a large number of responses to be stored in the database during the initial phase. In the supply chain, the owners of PUF-enabled tags change frequently, so many authentication and delegation protocols have been proposed. In this paper, a new lightweight authentication and delegation protocol for RFID tags (LADP) is proposed. The new protocol does not require many PUF responses to be pre-stored in the database. When the authentication messages are exchanged, the next response of the PUF is passed to the reader secretly. In the transfer of ownership, the new owner does not obtain information about the original owner's interactions, which protects the privacy of the original owner. Meanwhile, the original owner cannot continue to access or track the tag, which protects the privacy of the new owner. In terms of efficiency, the new protocol replaces the pseudorandom number generator with the randomness of the PUF, which is suitable for low-cost tags. The computation and communication costs are reduced and compare favourably with other protocols.
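
The abstract stays at a high level, but the central trick, proving knowledge of the current PUF response while passing the next response to the reader under a mask, can be illustrated with a toy simulation. Everything below (the PUF modeled as a keyed hash, the message layout, the masking) is an illustrative assumption and not the LADP message flow.

```python
"""Toy illustration of chaining PUF responses between tag and reader.

The PUF is modeled as a keyed hash and the messages are assumptions made for
illustration; this is NOT the LADP protocol from the paper.
"""
import hashlib, os

def toy_puf(device_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for a physical PUF: an unclonable mapping challenge -> response."""
    return hashlib.sha256(device_secret + challenge).digest()

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Setup: reader knows one (challenge, response) pair enrolled earlier.
tag_secret = os.urandom(32)                       # embodied in the tag's physical PUF
c1 = os.urandom(32)
reader_db = {"challenge": c1, "response": toy_puf(tag_secret, c1)}

# --- One authentication round ---
nonce = os.urandom(32)                            # reader -> tag
c2 = os.urandom(32)                               # tag picks the next challenge
r1 = toy_puf(tag_secret, reader_db["challenge"])  # current response, known to both
r2 = toy_puf(tag_secret, c2)                      # next response, known only to the tag
proof = h(r1, nonce)                              # proves knowledge of r1
masked_next = xor(r2, h(r1, nonce, b"mask"))      # r2 hidden from eavesdroppers
# tag -> reader: (proof, c2, masked_next)

# Reader side: verify the proof and roll the database forward.
assert proof == h(reader_db["response"], nonce)
next_response = xor(masked_next, h(reader_db["response"], nonce, b"mask"))
reader_db = {"challenge": c2, "response": next_response}
print("authenticated; database rolled to the next PUF response")
```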

Bours, P., Brahmanpally, S..  2017.  Language Dependent Challenge-Based Keystroke Dynamics. 2017 International Carnahan Conference on Security Technology (ICCST). :1–6.

Keystroke Dynamics can be used as an unobtrusive method to enhance password authentication, by checking the typing rhythm of the user. Fixed passwords give an attacker the possibility to learn to mimic the typing behaviour of a victim. In this paper we investigate the performance of a keystroke dynamics (KD) system when the users have to type given (English) words. Under the assumption that it is easy to type words in one's native language and difficult in a foreign language, we also test the performance of such a challenge-based KD system when the challenges are not common English words, but words in the native language of the user. We collected data from participants with 6 different native language backgrounds and had them type random 8-12 character words in each of the 6 languages. The participants also typed random English words and random French words. English was assumed to be a language familiar to all participants, while French was not a native language to any participant and most likely most participants were not fluent in French. Analysis showed that using language-dependent words gave better performance of the challenge-based KD system compared to an all-English challenge-based system. When using words in a native language, participants whose mother tongue equalled that language performed similarly to the all-English challenge-based system, but the non-native speakers had an FMR that was significantly lower than that of the native speakers. We found that native Telugu speakers had an FMR of less than 1% when writing Spanish or Slovak words. We also found that duration features were best for recognizing genuine users, but latency features performed best for recognizing non-native impostor users.
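
For context on the duration and latency features mentioned at the end of the abstract, the sketch below (an illustrative toy, not the authors' system) extracts per-key hold durations and down-down latencies from timestamped key events and scores an attempt against an enrolled template with a simple scaled distance.

```python
"""Toy keystroke-dynamics features: hold durations and down-down latencies.

Thresholds and the distance measure are assumptions, not the paper's system.
"""
from statistics import mean

def features(events):
    """events: list of (key, t_down, t_up) in milliseconds, in typing order."""
    durations = [t_up - t_down for _, t_down, t_up in events]
    latencies = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]
    return durations + latencies

def enroll(samples):
    """Build a template: per-feature mean and a crude spread estimate."""
    cols = list(zip(*[features(s) for s in samples]))
    means = [mean(c) for c in cols]
    spreads = [max(max(c) - min(c), 1.0) for c in cols]
    return means, spreads

def score(template, attempt):
    """Average scaled deviation; lower means closer to the enrolled rhythm."""
    means, spreads = template
    return mean(abs(x - m) / s for x, m, s in zip(features(attempt), means, spreads))

# Example: three enrollment samples and one attempt for the word "ab".
enrolled = [
    [("a", 0, 95), ("b", 180, 270)],
    [("a", 0, 100), ("b", 190, 285)],
    [("a", 0, 90), ("b", 170, 260)],
]
attempt = [("a", 0, 98), ("b", 185, 275)]
template = enroll(enrolled)
print("distance:", round(score(template, attempt), 3))  # accept if below a tuned threshold
```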

Ko, Wilson K.H., Wu, Yan, Tee, Keng Peng.  2016.  LAP: A Human-in-the-loop Adaptation Approach for Industrial Robots. Proceedings of the Fourth International Conference on Human Agent Interaction. :313–319.
In the last few years, a shift from mass production to mass customisation is observed in the industry. Easily reprogrammable robots that can perform a wide variety of tasks are desired to keep up with the trend of mass customisation while saving costs and development time. Learning by Demonstration (LfD) is an easy way to program the robots in an intuitive manner and provides a solution to this problem. In this work, we discuss and evaluate LAP, a three-stage LfD method that conforms to the criteria for the high-mix-low-volume (HMLV) industrial settings. The algorithm learns a trajectory in the task space after which small segments can be adapted on-the-fly by using a human-in-the-loop approach. The human operator acts as a high-level adaptation, correction and evaluation mechanism to guide the robot. This way, no sensors or complex feedback algorithms are needed to improve robot behaviour, so errors and inaccuracies induced by these subsystems are avoided. After the system performs at a satisfactory level after the adaptation, the operator will be removed from the loop. The robot will then proceed in a feed-forward fashion to optimise for speed. We demonstrate this method by simulating an industrial painting application. A KUKA LBR iiwa is taught how to draw an eight figure which is reshaped by the operator during adaptation.
McDuff, D., Soleymani, M..  2017.  Large-scale Affective Content Analysis: Combining Media Content Features and Facial Reactions. 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017). :339–345.
We present a novel multimodal fusion model for affective content analysis, combining visual, audio and deep visual-sentiment descriptors from the media content with automated facial action measurements from naturalistic responses to the media. We collected a dataset of 48,867 facial responses to 384 media clips and extracted a rich feature set from the facial responses and media content. The stimulus videos were validated to be informative, inspiring, persuasive, sentimental or amusing. By combining the features, we were able to obtain a classification accuracy of 63% (weighted F1-score: 0.62) for a five-class task. This was a significant improvement over using the media content features alone. By analyzing the feature sets independently, we found that states of informed and persuaded were difficult to differentiate from facial responses alone due to the presence of similar sets of action units in each state (AU 2 occurring frequently in both cases). Facial actions were beneficial in differentiating between amused and informed states whereas media content features alone performed less well due to similarities in the visual and audio make up of the content. We highlight examples of content and reactions from each class. This is the first affective content analysis based on reactions of 10,000s of people.
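
The feature-level fusion described above can be illustrated generically: concatenate the media-content and facial-response feature vectors and train a single classifier, reporting weighted F1 for the five-class task. The sketch below uses synthetic data and scikit-learn and is not the authors' model or dataset.

```python
"""Generic feature-level fusion sketch (synthetic data, not the paper's model)."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
content_feats = rng.normal(size=(n, 40))   # stand-in for visual/audio/sentiment descriptors
facial_feats = rng.normal(size=(n, 17))    # stand-in for facial action unit measurements
labels = rng.integers(0, 5, size=n)        # informative/inspiring/persuasive/sentimental/amusing

X = np.hstack([content_feats, facial_feats])   # early (feature-level) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("weighted F1:", round(f1_score(y_te, clf.predict(X_te), average="weighted"), 3))
```
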
Sudhodanan, A., Carbone, R., Compagna, L., Dolgin, N., Armando, A., Morelli, U..  2017.  Large-Scale Analysis & Detection of Authentication Cross-Site Request Forgeries. 2017 IEEE European Symposium on Security and Privacy (EuroS&P). :350–365.
Cross-Site Request Forgery (CSRF) attacks are one of the critical threats to web applications. In this paper, we focus on CSRF attacks targeting web sites' authentication and identity management functionalities. We will refer to them collectively as Authentication CSRF (Auth-CSRF in short). We started by collecting several Auth-CSRF attacks reported in the literature, then analyzed their underlying strategies and identified 7 security testing strategies that can help a manual tester uncover vulnerabilities enabling Auth-CSRF. In order to check the effectiveness of our testing strategies and to estimate the incidence of Auth-CSRF, we conducted an experimental analysis considering 300 web sites belonging to 3 different rank ranges of the Alexa global top 1500. The results of our experiments are alarming: out of the 300 web sites we considered, 133 qualified for conducting our experiments and 90 of these suffered from at least one vulnerability enabling Auth-CSRF (i.e. 68%). We further generalized our testing strategies, enhanced them with the knowledge we acquired during our experiments and implemented them as an extension (namely CSRF-checker) to the open-source penetration testing tool OWASP ZAP. With the help of CSRFchecker, we tested 132 additional web sites (again from the Alexa global top 1500) and identified 95 vulnerable ones (i.e. 72%). Our findings include serious vulnerabilities among the web sites of Microsoft, Google, eBay etc. Finally, we responsibly disclosed our findings to the affected vendors.
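
One of the most basic checks underlying such testing strategies, whether a state-changing form carries a hidden anti-CSRF token at all, can be sketched as follows. The token field names and the sample page are assumptions for illustration, not the logic of the paper's CSRF-checker extension.

```python
"""Minimal heuristic: does an HTML form include a hidden anti-CSRF token field?

The candidate field names are illustrative assumptions, not CSRF-checker's logic.
"""
from html.parser import HTMLParser

TOKEN_HINTS = ("csrf", "xsrf", "authenticity_token", "requestverificationtoken")

class FormScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []          # one dict per <form>: {"action": ..., "has_token": bool}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.forms.append({"action": attrs.get("action", ""), "has_token": False})
        elif tag == "input" and self.forms:
            name = (attrs.get("name") or "").lower()
            if attrs.get("type", "").lower() == "hidden" and any(h in name for h in TOKEN_HINTS):
                self.forms[-1]["has_token"] = True

page = """
<form action="/login" method="post">
  <input type="text" name="user"><input type="password" name="pass">
</form>
<form action="/change-email" method="post">
  <input type="hidden" name="csrf_token" value="f3a9...">
  <input type="email" name="email">
</form>
"""
scanner = FormScanner()
scanner.feed(page)
for form in scanner.forms:
    status = "token present" if form["has_token"] else "no anti-CSRF token (candidate for Auth-CSRF testing)"
    print(form["action"], "->", status)
```
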
Guo, Qi, Song, Yang.  2016.  Large-Scale Analysis of Viewing Behavior: Towards Measuring Satisfaction with Mobile Proactive Systems. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :579–588.

Recently, proactive systems such as Google Now and Microsoft Cortana have become increasingly popular in reforming the way users access information on mobile devices. In these systems, relevant content is presented to users based on their context without a query in the form of information cards that do not require a click to satisfy the users. As a result, prior approaches based on clicks cannot provide reliable measurements of user satisfaction with such systems. It is also unclear how much of the previous findings regarding good abandonment with reactive Web searches can be applied to these proactive systems due to the intrinsic difference in user intent, the greater variety of content types and their presentations. In this paper, we present the first large-scale analysis of viewing behavior based on the viewport (the visible fraction of a Web page) of the mobile devices, towards measuring user satisfaction with the information cards of the mobile proactive systems. In particular, we identified and analyzed a variety of factors that may influence the viewing behavior, including biases from ranking positions, the types and attributes of the information cards, and the touch interactions with the mobile devices. We show that by modeling the various factors we can better measure user satisfaction with the mobile proactive systems, enabling stronger statistical power in large-scale online A/B testing.

Scheitle, Q., Gasser, O., Rouhi, M., Carle, G..  2017.  Large-Scale Classification of IPv6-IPv4 Siblings with Variable Clock Skew. 2017 Network Traffic Measurement and Analysis Conference (TMA). :1–9.

Linking the growing IPv6 deployment to existing IPv4 addresses is an interesting field of research, be it for network forensics, structural analysis, or reconnaissance. In this work, we focus on classifying pairs of server IPv6 and IPv4 addresses as siblings, i.e., running on the same machine. Our methodology leverages active measurements of TCP timestamps and other network characteristics, which we measure against a diverse ground truth of 682 hosts. We define and extract a set of features, including estimation of variable (opposed to constant) remote clock skew. On these features, we train a manually crafted algorithm as well as a machine-learned decision tree. By conducting several measurement runs and training in cross-validation rounds, we aim to create models that generalize well and do not overfit our training data. We find both models to exceed 99% precision in train and test performance. We validate scalability by classifying 149k siblings in a large-scale measurement of 371k sibling candidates. We argue that this methodology, thoroughly cross-validated and likely to generalize well, can aid comparative studies of IPv6 and IPv4 behavior in the Internet. Striving for applicability and replicability, we release ready-to-use source code and raw data from our study.
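
One of the network characteristics used here, remote clock skew estimated from TCP timestamps, reduces to a regression of the remote timestamp drift against local time. The sketch below (synthetic observations, our own code rather than the authors') fits a constant skew with least squares; the variable-skew estimation in the paper would generalize this to piecewise or higher-order fits. Comparing the skew estimated over IPv4 with the skew estimated over IPv6 then yields one sibling feature.

```python
"""Constant clock-skew estimation from TCP timestamp observations (toy data)."""
import numpy as np

HZ = 1000.0               # assumed remote TCP timestamp tick rate (ticks per second)
true_skew_ppm = 85.0      # synthetic ground truth

rng = np.random.default_rng(1)
local_t = np.sort(rng.uniform(0, 600, size=200))            # local receive times (s)
remote_ticks = (local_t * (1 + true_skew_ppm * 1e-6) * HZ   # remote TSval counter
                + rng.normal(0, 2, size=local_t.size))      # + network/scheduling noise

# Offset of the remote clock (converted to seconds) relative to local time.
offset = remote_ticks / HZ - local_t
# Least-squares line: offset ~= skew * local_t + const; the slope is the skew.
slope, _const = np.polyfit(local_t, offset, 1)
print(f"estimated skew: {slope * 1e6:.1f} ppm (true {true_skew_ppm} ppm)")
```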

Mihaylov, Todor, Balchev, Daniel, Kiprov, Yasen, Koychev, Ivan, Nakov, Preslav.  2017.  Large-Scale Goodness Polarity Lexicons for Community Question Answering. Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. :1185–1188.

We transfer a key idea from the field of sentiment analysis to a new domain: community question answering (cQA). The cQA task we are interested in is the following: given a question and a thread of comments, we want to re-rank the comments, so that the ones that are good answers to the question would be ranked higher than the bad ones. We notice that good vs. bad comments use specific vocabulary and that one can often predict the goodness/badness of a comment even ignoring the question, based on the comment contents only. This leads us to the idea to build a good/bad polarity lexicon as an analogy to the positive/negative sentiment polarity lexicons, commonly used in sentiment analysis. In particular, we use pointwise mutual information in order to build large-scale goodness polarity lexicons in a semi-supervised manner starting with a small number of initial seeds. The evaluation results show an improvement of 0.7 MAP points absolute over a very strong baseline, and state-of-the art performance on SemEval-2016 Task 3.
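
The lexicon construction rests on pointwise mutual information between a word and the good/bad comment classes. A minimal sketch of that computation is shown below; the toy comments and the add-one smoothing are our own assumptions, not the paper's corpus or its seed-based expansion.

```python
"""Toy goodness-polarity lexicon via pointwise mutual information."""
import math
from collections import Counter

good_comments = ["try reinstalling the driver", "use the settings menu", "reinstalling fixed it"]
bad_comments = ["no idea sorry", "me too same problem", "sorry cannot help"]

def doc_freq(comments):
    c = Counter()
    for text in comments:
        c.update(set(text.split()))        # document frequency per word
    return c

good_df, bad_df = doc_freq(good_comments), doc_freq(bad_comments)
n_good, n_bad = len(good_comments), len(bad_comments)
n_total = n_good + n_bad

def polarity(word, alpha=1.0):
    """PMI(word, good) - PMI(word, bad) with add-alpha smoothing; >0 leans 'good'."""
    g = good_df.get(word, 0) + alpha
    b = bad_df.get(word, 0) + alpha
    total = g + b
    pmi_good = math.log((g / total) / ((n_good + alpha) / (n_total + 2 * alpha)))
    pmi_bad = math.log((b / total) / ((n_bad + alpha) / (n_total + 2 * alpha)))
    return pmi_good - pmi_bad

lexicon = {w: round(polarity(w), 2) for w in set(good_df) | set(bad_df)}
print(sorted(lexicon.items(), key=lambda kv: -kv[1])[:5])   # most 'good'-leaning words
```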

Li, Bo, Roundy, Kevin, Gates, Chris, Vorobeychik, Yevgeniy.  2017.  Large-Scale Identification of Malicious Singleton Files. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :227–238.

We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.

Zhu, Yi, Liu, Sen, Newsam, Shawn.  2017.  Large-Scale Mapping of Human Activity Using Geo-Tagged Videos. Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. :68:1–68:4.

This paper is the first work to perform spatio-temporal mapping of human activity using the visual content of geo-tagged videos. We utilize a recent deep-learning based video analysis framework, termed hidden two-stream networks, to recognize a range of activities in YouTube videos. This framework is efficient and can run in real time or faster which is important for recognizing events as they occur in streaming video or for reducing latency in analyzing already captured video. This is, in turn, important for using video in smart-city applications. We perform a series of experiments to show our approach is able to map activities both spatially and temporally.

Fabian, Benjamin, Ermakova, Tatiana, Lentz, Tino.  2017.  Large-Scale Readability Analysis of Privacy Policies. Proceedings of the International Conference on Web Intelligence. :18–25.

Online privacy policies notify users of a Website how their personal information is collected, processed and stored. Against the background of rising privacy concerns, privacy policies seem to represent an influential instrument for increasing customer trust and loyalty. However, in practice, consumers seem to actually read privacy policies only in rare cases, possibly reflecting the common assumption stating that policies are hard to comprehend. By designing and implementing an automated extraction and readability analysis toolset that embodies a diversity of established readability measures, we present the first large-scale study that provides current empirical evidence on the readability of nearly 50,000 privacy policies of popular English-speaking Websites. The results empirically confirm that on average, current privacy policies are still hard to read. Furthermore, this study presents new theoretical insights for readability research, in particular, to what extent practical readability measures are correlated. Specifically, it shows the redundancy of several well-established readability metrics such as SMOG, RIX, LIX, GFI, FKG, ARI, and FRES, thus easing future choice making processes and comparisons between readability studies, as well as calling for research towards a readability measures framework. Moreover, a more sophisticated privacy policy extractor and analyzer as well as a solid policy text corpus for further research are provided.
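
The established measures named in the abstract are closed-form statistics over sentence, word, syllable, and character counts. The sketch below computes three of them (FRES, FKG, ARI) with a crude vowel-group syllable heuristic; the sample sentence and the syllable counter are illustrative approximations, not the paper's toolset.

```python
"""Three classic readability measures (FRES, FKG, ARI) with a crude syllable heuristic."""
import re

def syllables(word):
    """Rough count: runs of vowels, minus a trailing silent 'e', at least 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    w = len(words)
    syl = sum(syllables(x) for x in words)
    chars = sum(len(x) for x in words)
    fres = 206.835 - 1.015 * (w / sentences) - 84.6 * (syl / w)       # Flesch Reading Ease
    fkg = 0.39 * (w / sentences) + 11.8 * (syl / w) - 15.59           # Flesch-Kincaid Grade
    ari = 4.71 * (chars / w) + 0.5 * (w / sentences) - 21.43          # Automated Readability Index
    return {"FRES": round(fres, 1), "FKG": round(fkg, 1), "ARI": round(ari, 1)}

policy_excerpt = ("We may share aggregated or de-identified information with third parties "
                  "for research, marketing, analytics, and other purposes.")
print(readability(policy_excerpt))
```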

McCulley, Shane, Roussev, Vassil.  2018.  Latent Typing Biometrics in Online Collaboration Services. Proceedings of the 34th Annual Computer Security Applications Conference. :66–76.

The use of typing biometrics—the characteristic typing patterns of individual keyboard users—has been studied extensively in the context of enhancing multi-factor authentication services. The key starting point for such work has been the collection of high-fidelity local timing data, and the key (implicit) security assumption has been that such biometrics could not be obtained by other means. We show the latter assumption to be false, and that it is entirely feasible to obtain useful typing biometric signatures from third-party timing logs. Specifically, we show that the logs produced by realtime collaboration services during their normal operation are of sufficient fidelity to successfully impersonate a user using remote data only. Since the logs are routinely shared as a byproduct of the services' operation, this creates an entirely new avenue of attack that few users would be aware of. As a proof of concept, we construct successful biometric attacks using only the log-based structure (complete editing history) of a shared Google Docs, or Zoho Writer, document which is readily available to all contributing parties. Using the largest available public data set of typing biometrics, we are able to create successful forgeries 100% of the time against a commercial biometric service. Our results suggest that typing biometrics are not robust against practical forgeries, and should not be given the same weight as other authentication factors. Another important implication is that the routine collection of detailed timing logs by various online services also inherently (and implicitly) contains biometrics. This not only raises obvious privacy concerns, but may also undermine the effectiveness of network anonymization solutions, such as ToR, when used with existing services.
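
The attack works because a collaboration service's revision history already contains per-keystroke timing. The sketch below (a made-up log format, not the actual Google Docs or Zoho Writer schema) turns a sequence of single-character insert events into digraph latency features of the kind keystroke-biometric systems consume.

```python
"""Recovering digraph latencies from a collaboration-style edit log.

The log format below is a hypothetical stand-in for a service's revision
history, not the actual Google Docs or Zoho Writer schema.
"""
from collections import defaultdict
from statistics import mean

# Hypothetical log: one entry per insert operation (character, server timestamp in ms).
edit_log = [
    ("p", 10_000), ("a", 10_142), ("s", 10_301), ("s", 10_455),
    ("w", 10_640), ("o", 10_782), ("r", 10_920), ("d", 11_101),
]

def digraph_latencies(log, max_gap_ms=1500):
    """Map each adjacent character pair to the press-to-press delays observed."""
    feats = defaultdict(list)
    for (c1, t1), (c2, t2) in zip(log, log[1:]):
        gap = t2 - t1
        if gap <= max_gap_ms:               # skip pauses that are not typing rhythm
            feats[c1 + c2].append(gap)
    return {pair: mean(gaps) for pair, gaps in feats.items()}

profile = digraph_latencies(edit_log)
print(profile)   # e.g. {'pa': 142, 'as': 159, ...} — usable as a biometric template
```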

Härtig, H., Roitzsch, M., Weinhold, C., Lackorzynski, A..  2017.  Lateral Thinking for Trustworthy Apps. 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). :1890–1899.

The growing computerization of critical infrastructure as well as the pervasiveness of computing in everyday life has led to increased interest in secure application development. We observe a flurry of new security technologies like ARM TrustZone and Intel SGX, but a lack of a corresponding architectural vision. We are convinced that point solutions are not sufficient to address the overall challenge of secure system design. In this paper, we outline our take on a trusted component ecosystem of small individual building blocks with strong isolation. In our view, applications should no longer be designed as massive stacks of vertically layered frameworks, but instead as horizontal aggregates of mutually isolated components that collaborate across machine boundaries to provide a service. Lateral thinking is needed to make secure systems going forward.

Yan-Tao, Zhong.  2018.  Lattice Based Authenticated Key Exchange with Universally Composable Security. 2018 International Conference on Networking and Network Applications (NaNA). :86–90.

The Internet of Things (IoT) has experienced rapid development in recent years, while its security and privacy remain a major challenge. One of the main security goals for the IoT is to build secure and authenticated channels between IoT nodes. A common and widely used way to achieve this goal is an authenticated key exchange protocol. However, with the increasing progress of quantum computation, most authenticated key exchange protocols in use today are threatened by the rise of quantum computers. In this study, we address this problem by using a ring-SIS based KEM and a hash function to construct an authenticated key exchange scheme, so that the scheme rests on lattice-based hard problems believed to be secure even against quantum attacks. We also prove the universally composable security of our scheme. The scheme can hence remain secure while running in complex environments.

Denning, Dorothy E..  1976.  A Lattice Model of Secure Information Flow. Commun. ACM. 19:236–243.

This paper investigates mechanisms that guarantee secure information flow in a computer system. These mechanisms are examined within a mathematical framework suitable for formulating the requirements of secure information flow among security classes. The central component of the model is a lattice structure derived from the security classes and justified by the semantics of information flow. The lattice properties permit concise formulations of the security requirements of different existing systems and facilitate the construction of mechanisms that enforce security. The model provides a unifying view of all systems that restrict information flow, enables a classification of them according to security objectives, and suggests some new approaches. It also leads to the construction of automatic program certification mechanisms for verifying the secure flow of information through a program.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper.

The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
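
A minimal way to see the model in action is to instantiate the lattice as the powerset of category labels, with set union as the class-combining operator and subset inclusion as the may-flow relation; certifying an assignment then reduces to a join followed by an ordering check. The Python sketch below is our own illustration of that idea, not Denning's certification mechanism.

```python
"""Denning-style flow check on a powerset lattice of categories (illustrative).

Classes are frozensets of category labels; join (⊕) is union and the may-flow
relation (→) is subset inclusion.
"""
from functools import reduce

def join(*classes):
    """Class-combining operator ⊕: least upper bound in the powerset lattice."""
    return reduce(frozenset.union, classes, frozenset())

def may_flow(src, dst):
    """a → b holds iff b dominates a (here: a's categories are a subset of b's)."""
    return src <= dst

# Security classes bound to storage objects.
clearance = {
    "salary": frozenset({"payroll"}),
    "rating": frozenset({"hr"}),
    "report": frozenset({"payroll", "hr"}),
    "public_page": frozenset(),
}

def certify_assignment(target, operands):
    """Certify 'target := f(operands)': the join of operand classes must flow to target."""
    return may_flow(join(*(clearance[o] for o in operands)), clearance[target])

print(certify_assignment("report", ["salary", "rating"]))   # True: payroll ⊕ hr → {payroll, hr}
print(certify_assignment("public_page", ["salary"]))        # False: payroll cannot flow down
```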

Howe, J., Moore, C., O'Neill, M., Regazzoni, F., Güneysu, T., Beeden, K..  2016.  Lattice-based Encryption Over Standard Lattices In Hardware. Proceedings of the 53rd Annual Design Automation Conference. :162:1–162:6.

Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum-resilience, versatility, and relatively low key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus is a benchmark for future implementations. Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and also is the first to investigate the cost savings of implementing with λ/2-bits of precision. Performance results are promising compared to the hardware designs of the equivalent ring-LWE scheme: in addition to resting on stronger security assumptions, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
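
For orientation, encryption over standard (non-ideal) LWE of the kind implemented in this hardware design can be sketched in software with toy parameters. The parameters, sampling choices, and encoding below are illustrative only and deliberately far too small to be secure; they are not the paper's design or parameter set.

```python
"""Toy standard-LWE bit encryption (Regev-style), with insecure demo parameters."""
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 64, 256, 4093        # toy dimensions and modulus (NOT secure)

def keygen():
    s = rng.integers(0, q, size=n)                       # secret vector
    A = rng.integers(0, q, size=(m, n))                  # public matrix
    e = rng.integers(-2, 3, size=m)                      # small error
    b = (A @ s + e) % q
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    r = rng.integers(0, 2, size=m)                       # random 0/1 selection vector
    c1 = (r @ A) % q
    c2 = (r @ b + bit * (q // 2)) % q
    return c1, c2

def decrypt(sk, ct):
    c1, c2 = ct
    v = (c2 - c1 @ sk) % q                               # = bit*(q//2) + small error
    return int(min(v, q - v) > q // 4)                   # closer to q/2 than to 0 -> bit 1

pk, sk = keygen()
for bit in (0, 1):
    assert decrypt(sk, encrypt(pk, bit)) == bit
print("toy LWE round-trip OK")
```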

del Pino, Rafael, Lyubashevsky, Vadim, Seiler, Gregor.  2018.  Lattice-Based Group Signatures and Zero-Knowledge Proofs of Automorphism Stability. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :574–591.

We present a group signature scheme, based on the hardness of lattice problems, whose outputs are more than an order of magnitude smaller than the currently most efficient schemes in the literature. Since lattice-based schemes are also usually non-trivial to efficiently implement, we additionally provide the first experimental implementation of lattice-based group signatures demonstrating that our construction is indeed practical – all operations take less than half a second on a standard laptop. A key component of our construction is a new zero-knowledge proof system for proving that a committed value belongs to a particular set of small size. The sets for which our proofs are applicable are exactly those that contain elements that remain stable under Galois automorphisms of the underlying cyclotomic number field of our lattice-based protocol. We believe that these proofs will find applications in other settings as well. The motivation of the new zero-knowledge proof in our construction is to allow the efficient use of the selectively-secure signature scheme (i.e. a signature scheme in which the adversary declares the forgery message before seeing the public key) of Agrawal et al. (Eurocrypt 2010) in constructions of lattice-based group signatures and other privacy protocols. For selectively-secure schemes to be meaningfully converted to standard signature schemes, it is crucial that the size of the message space is not too large. Using our zero-knowledge proofs, we can strategically pick small sets for which we can provide efficient zero-knowledge proofs of membership.

Gennaro, Rosario, Minelli, Michele, Nitulescu, Anca, Orrù, Michele.  2018.  Lattice-Based Zk-SNARKs from Square Span Programs. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :556–573.

Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short and efficiently verifiable proofs. They elegantly resolve the juxtaposition of individual privacy and public trust, by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. To this day, zk-SNARKs are being used for delegating computation, electronic cryptocurrencies, and anonymous credentials. However, all current SNARK implementations rely on pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades. In this work, we introduce the first designated-verifier zk-SNARK based on lattice assumptions, which are believed to be post-quantum secure. We provide a generalization in the spirit of Gennaro et al. (Eurocrypt'13) to the SNARK of Danezis et al. (Asiacrypt'14) that is based on Square Span Programs (SSPs) and relies on weaker computational assumptions. We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings. We provide a concrete choice of parameters as well as extensive benchmarks on a C implementation, showing that our construction is practically instantiable.

Ning, W., Zhi-Jun, L..  2018.  A Layer-Built Method to the Relevancy of Electronic Evidence. 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). :416–420.

To combat cyber crimes, electronic evidence has played an increasing role, but in judicial practice electronic evidence is not widely applied because of the natural contradiction between the epistemic uncertainty of electronic evidence and the judge's principle of discretionary evidence in court. In this paper, we put forward a layer-built method to analyze the relevancy of electronic evidence and discuss the analytical process combined with a case study. Initial practice shows the model is feasible and has consulting value in analyzing the relevancy of electronic evidence.

Liu, C., Singhal, A., Wijesekera, D..  2017.  A Layered Graphical Model for Mission Attack Impact Analysis. 2017 IEEE Conference on Communications and Network Security (CNS). :602–609.

Business or military missions are supported by hardware and software systems. Unanticipated cyber activities occurring in supporting systems can impact such missions. In order to quantify such impact, we describe a layered graphical model as an extension of forensic investigation. Our model has three layers: the upper layer models the operational tasks that constitute the mission and their inter-dependencies; the middle layer reconstructs attack scenarios from available evidence and captures their inter-relationships; and, in cases where not all evidence is available, the lower layer reconstructs potentially missing attack steps. Using the three levels of graphs constructed in these steps, we present a method to compute the impacts of attack activities on missions. We use NIST National Vulnerability Database (NVD) Common Vulnerability Scoring System (CVSS) scores or forensic investigators' estimates in our impact computations. We present a case study to show the utility of our model.
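
The impact computation sketched in the abstract can be approximated as score propagation over a layered dependency graph: attack-step nodes carry CVSS-derived scores and each task aggregates the impact of the steps and subtasks it depends on. The toy below uses a made-up mission graph and a noisy-OR style aggregation rule, which are our own assumptions rather than the paper's formula.

```python
"""Toy impact propagation over a layered mission/attack dependency graph.

The mission graph, the CVSS-derived node scores, and the aggregation rule are
illustrative assumptions, not the paper's computation.
"""
# Lower layer: attack steps with normalized CVSS-derived impact in [0, 1].
attack_impact = {"exploit_vpn": 0.78, "dump_credentials": 0.61, "tamper_db": 0.90}

# Upper/middle layers: each node depends on attack steps or on other nodes.
depends_on = {
    "logistics_db": ["tamper_db"],
    "auth_service": ["exploit_vpn", "dump_credentials"],
    "plan_mission": ["logistics_db", "auth_service"],
    "execute_mission": ["plan_mission"],
}

def impact(node, cache=None):
    """Impact of a node: its own score if it is an attack step, otherwise a
    noisy-OR combination of its dependencies' impacts."""
    cache = {} if cache is None else cache
    if node in attack_impact:
        return attack_impact[node]
    if node not in cache:
        deps = [impact(d, cache) for d in depends_on.get(node, [])]
        combined = 1.0
        for x in deps:
            combined *= (1.0 - x)        # 1 - prod(1 - x): any impacted dependency degrades the task
        cache[node] = 1.0 - combined
    return cache[node]

for task in ("logistics_db", "auth_service", "plan_mission", "execute_mission"):
    print(f"{task}: impact {impact(task):.2f}")
```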