# Biblio

Found 39 results

Filters: Keyword is theoretical cryptography
2020-03-04
.  2019.  2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). :1–5.

This paper presents the recent progress in studying the algorithmic computability of capacity expressions of secure communication systems. Several communication scenarios are discussed and reviewed including the classical wiretap channel, the wiretap channel with an active jammer, and the problem of secret key generation.

.  2019.  2019 IEEE 19th International Conference on Communication Technology (ICCT). :113–117.

In recent years, secret key generation based on physical-layer security has attracted considerable attention. Wireless channel reciprocity and eavesdropping attacks are critical problems in secret key generation studies. In this paper, we carry out a simulation and experimental study of channel reciprocity by measuring channel state information (CSI) in both time division duplexing (TDD) and frequency division duplexing (FDD) modes. In the simulation study, a close-eavesdropping wiretap channel model is introduced to evaluate the security of the CSI using the Pearson correlation coefficient. In the experimental study, an indoor wireless CSI measurement system is built with N210 and X310 universal software radio peripheral (USRP) platforms. In TDD mode, theoretical analysis and most of the experimental results show that the closer the eavesdropping distance, the higher the CSI correlation coefficient between the eavesdropping channel and the legitimate channel. However, in a real environment, when the eavesdropping distance is very close (less than 1/4 wavelength), this CSI correlation drops sharply. In FDD mode, both theoretical analysis and experimental results show that the wireless channel still exhibits some reciprocity. As the frequency interval increases, the FDD channel reciprocity observed in the real environment is better than the theoretical analysis predicts.
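The abstract evaluates CSI security with the Pearson correlation coefficient. A minimal sketch of that comparison, with made-up CSI magnitude samples standing in for real measurements:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical CSI magnitude samples: the legitimate channel versus a nearby
# and a distant eavesdropper's channel (values invented for illustration).
legit = [0.91, 0.87, 0.95, 0.80, 0.88, 0.93]
eave  = [0.89, 0.85, 0.94, 0.82, 0.86, 0.92]  # close eavesdropper: high correlation
far   = [0.40, 0.97, 0.55, 0.90, 0.31, 0.76]  # distant eavesdropper: low correlation

print(round(pearson(legit, eave), 3))            # 0.964
print(pearson(legit, far) < pearson(legit, eave))  # True
```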

.  2019.  2019 IEEE 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM). :1–4.

This article analyzes applied tasks and methods of entropy signal processing. Theoretical comments on specific schemes of special processors for determining probability and correlation activity are given. The prospective influence of C. Shannon's probabilistic entropy on cipher signal receivers is reviewed. Examples of entropy-manipulated signals and system characteristics of the proposed special processors are given.
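The Shannon entropy the abstract invokes can be computed directly from a signal's empirical symbol distribution; a minimal sketch:

```python
import math
from collections import Counter

def shannon_entropy(signal):
    # H = -sum(p_i * log2(p_i)) over the empirical symbol distribution.
    counts = Counter(signal)
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform 4-symbol signal carries 2 bits/symbol; a constant one carries 0.
print(shannon_entropy("ABCDABCD"))       # 2.0
print(abs(shannon_entropy("AAAAAAAA")))  # 0.0
```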

.  2019.  2019 International Conference on Advanced Science and Engineering (ICOASE). :204–207.

The current paper proposes a method that combines the theoretical concepts of parallel processing offered by DNA computing and genetic algorithm (GA) environments with a novel mechanism for distinguishing and discovering cryptosystem keys. The work makes three contributions: first, the adoption of a final key-sequence mechanism based on the principle of interconnected sequence parts; second, exploitation of the parallelism that GA provides to search for the counter values of the challenge sequences used by the discrimination mechanism; and third, most importantly for extending the cipher-breaking capability, harnessing the parallelism found in DNA computing to discover the basic encryption key. The proposed method constructs a combined set of files that includes binary sequences produced by substituting the guessed attributes of the cryptosystem's binary equation system, together with files containing all the candidate DNA strands for each successive cipher character. These files are processed starting from the first-character file: a key sequence is extracted from each sequence in that file and processed against the binary counter sequences produced by the GA. The aim of the paper is to exploit and implement the theoretical principles of parallelism provided by the biological environment, together with the new sequence-recognition mechanism, in cryptanalysis.

.  2019.  2019 IEEE International Conference on Image Processing (ICIP). :3020–3022.

In this research project, we are interested in finding solutions to the problem of image analysis and processing in the encrypted domain. For security reasons, more and more digital data are transferred or stored in the encrypted domain. However, during the transmission or archiving of encrypted images, it is often necessary to analyze or process them without knowing the original content or the secret key used during the encryption phase. We propose to work on this problem by associating theoretical aspects with numerous applications. Our main contributions concern: data hiding in encrypted images, correction of noisy encrypted images, recompression of crypto-compressed images, and secret image sharing.

.  2019.  2019 IEEE International Symposium on Information Theory (ISIT). :822–826.

It is investigated how to achieve semantic security for the wiretap channel. A new type of functions called biregular irreducible (BRI) functions, similar to universal hash functions, is introduced. BRI functions provide a universal method of establishing secrecy. It is proved that the known secrecy rates of any discrete and Gaussian wiretap channel are achievable with semantic security by modular wiretap codes constructed from a BRI function and an error-correcting code. A characterization of BRI functions in terms of edge-disjoint biregular graphs on a common vertex set is derived. This is used to study examples of BRI functions and to construct new ones.
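The abstract compares BRI functions to universal hash functions. As background, here is a minimal sketch of a standard 2-universal hash family over a prime field (this is the classical notion BRI functions resemble, not the BRI construction itself; all parameters are chosen for illustration):

```python
import random

# 2-universal family over Z_p: h_{a,b}(x) = ((a*x + b) mod p) mod m.
P = 2**31 - 1  # a Mersenne prime

def make_hash(m, rng):
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

rng = random.Random(0)
m = 16
x, y = 12345, 67890

# Empirically check the pairwise-collision bound Pr[h(x) = h(y)] ~ 1/m.
trials = 20000
collisions = 0
for _ in range(trials):
    h = make_hash(m, rng)
    if h(x) == h(y):
        collisions += 1
print(collisions / trials)  # close to 1/16 = 0.0625
```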

.  2019.  2019 International Conference on Advanced Communication Technologies and Networking (CommNet). :1–9.

Access authentication is a key technology for verifying the legitimacy of mobile users accessing space-ground integrated networks (SGIN). A hierarchical identity-based signature over lattices (L-HIBS) based mobile access authentication mechanism is proposed to address the shortcomings of existing access authentication methods in SGIN, such as high computational complexity, large authentication delay, and lack of resistance to quantum attacks. Firstly, the idea of hierarchical identity-based cryptography is introduced according to the hierarchical distribution of nodes in SGIN, and a hierarchical access authentication architecture is built. Secondly, a new L-HIBS scheme is constructed based on the Small Integer Solution (SIS) problem to support hierarchical identity-based cryptography. Thirdly, a mobile access authentication protocol that supports bidirectional authentication and shared session key exchange is designed with the aforementioned L-HIBS scheme. Results of theoretical analysis and simulation experiments suggest that the L-HIBS scheme possesses strong unforgeability against selected-identity, adaptively-chosen-message attacks under the standard security model, and that the authentication protocol has lower computational overhead and shorter private keys and signatures than the given baseline protocols.

.  2019.  2019 Federated Conference on Computer Science and Information Systems (FedCSIS). :327–332.

We propose a new key sharing protocol executed over any constant-parameter noiseless public channel (such as the Internet itself) without any cryptographic assumptions or protocol restrictions on the SNR of the eavesdropper's channels. This protocol is based on the extraction by legitimate users of eigenvalues from randomly generated matrices. A similar protocol was proposed recently by G. Qin and Z. Ding, but we prove that this protocol is in fact insecure, and we modify it to be both reliable and secure using artificial noise and a privacy amplification procedure. Simulation results confirm these statements.
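A linear-algebra fact often used in matrix-eigenvalue key agreement is that XY and YX share the same eigenvalue spectrum, so two parties who each hold one secret factor can agree on a common spectrum. The abstract does not spell out the protocol, so the snippet below only illustrates the underlying identity (for 2×2 matrices, via the trace and determinant, which determine the spectrum), not the insecure original protocol or the authors' hardened variant:

```python
import random

def matmul2(X, Y):
    # 2x2 integer matrix product.
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def trace2(M):
    return M[0][0] + M[1][1]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

rng = random.Random(42)
X = [[rng.randint(-9, 9) for _ in range(2)] for _ in range(2)]  # one party's secret
Y = [[rng.randint(-9, 9) for _ in range(2)] for _ in range(2)]  # the other's secret

XY, YX = matmul2(X, Y), matmul2(Y, X)
# Same trace and determinant => same characteristic polynomial => same eigenvalues.
print(trace2(XY) == trace2(YX), det2(XY) == det2(YX))  # True True
```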

.  2019.  2019 IEEE International Conference on Blockchain (Blockchain). :237–244.

Blockchain networks which employ Proof-of-Work in their consensus mechanism may face inconsistencies in the form of forks. These forks are usually resolved through the application of block selection rules (such as the Nakamoto consensus). In this paper, we investigate the cause and length of forks for the Bitcoin network. We develop theoretical formulas which model the Bitcoin consensus and network protocols, based on an Erdős–Rényi random graph construction of the overlay network of peers. Our theoretical model addresses the effect of key parameters on the fork occurrence probability, such as block propagation delay, network bandwidth, and block size. We also leverage this model to estimate the weight of fork branches. Our model is implemented using the network simulator OMNET++ and validated by historical Bitcoin data. We show that under current conditions, Bitcoin will not benefit from increasing the number of connections per node.
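A first-order back-of-the-envelope version of the fork model (our simplification for illustration, not the paper's formulas): if block arrivals are Poisson and a competing block found within the propagation delay causes a fork, then the fork probability grows with the delay, and hence with block size:

```python
import math

def fork_probability(block_interval_s=600.0, propagation_delay_s=10.0):
    # Poisson block arrivals at rate lam = 1/600 blocks/s (Bitcoin's 10-minute
    # target). A fork occurs if a second block appears within the propagation
    # delay d of the first: P(fork) ~= 1 - exp(-lam * d).
    lam = 1.0 / block_interval_s
    return 1.0 - math.exp(-lam * propagation_delay_s)

print(round(fork_probability(), 4))  # 0.0165 for a 10 s delay
# Larger blocks propagate more slowly, so the fork probability rises.
print(fork_probability(600, 40) > fork_probability(600, 10))  # True
```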

.  2019.  2019 2nd International Conference on Computer Applications Information Security (ICCAIS). :1–6.

Due to the importance of securing electronic transactions, many cryptographic protocols have been employed that mainly depend on keys distributed between the intended parties. On classical computers, the security of these protocols depends on the mathematical complexity of the encoding functions and on the length of the key. However, the existing classical algorithms are 100% breakable given enough computational power, which quantum machines can provide. Moving to quantum computation, the field of security shifts to a new area of cryptographic solutions: quantum cryptography. The era of quantum computers is at its beginning, and there are few practical implementations and evaluations of quantum protocols. Therefore, this paper describes the well-known quantum key distribution protocol BB84 and then provides a practical implementation of it on IBM QX software. The practical implementation showed differences between BB84's theoretically expected results and the practical results. Because of this, the paper provides a statistical analysis of the experiments by comparing the standard deviations of the results. Using the BB84 protocol, the existence of a third-party eavesdropper can be detected; thus, calculations of the probability of detecting (or failing to detect) third-party eavesdropping are provided, and these values are again compared to the theoretical expectation. The calculations show that with a greater number of qubits, the probability of detecting the eavesdropper is higher.
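The eavesdropper-detection behavior described above can be reproduced with a toy classical simulation of BB84 (our sketch, not the paper's IBM QX implementation): an intercept-resend eavesdropper causes each sifted check bit to disagree with probability 1/4, so the detection probability over k check bits approaches 1 − (3/4)^k.

```python
import random

def bb84(n_qubits, eavesdrop, rng):
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_qubits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_qubits)]
    bob_bits = []
    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            eb = rng.randint(0, 1)                    # Eve picks a random basis
            bit = bit if eb == ab else rng.randint(0, 1)
        bob_bits.append(bit if bb == ab else rng.randint(0, 1))
    # Sift: keep only positions where Alice's and Bob's bases match.
    sifted = [(a, b) for a, b, x, y in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    errors = sum(1 for a, b in sifted if a != b)
    return errors, len(sifted)

rng = random.Random(7)
e0, n0 = bb84(2000, eavesdrop=False, rng=rng)
e1, n1 = bb84(2000, eavesdrop=True, rng=rng)
print(e0, round(e1 / n1, 2))  # no Eve: 0 errors; with Eve: QBER near 0.25
```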

2019-02-14
.  2018.  Proceedings of the 2018 International Conference on Management of Data. :905-920.
This paper studies the set similarity join problem with overlap constraints which, given two collections of sets and a constant c, finds all the set pairs in the datasets that share at least c common elements. This is a fundamental operation in many fields, such as information retrieval, data mining, and machine learning. The time complexity of all existing methods is O(n^2), where n is the total size of all the sets. In this paper, we present a size-aware algorithm with the time complexity of O(n^(2-1/c) · k^(1/(2c))) = o(n^2) + O(k), where k is the number of results. The size-aware algorithm divides all the sets into small and large ones based on their sizes and processes them separately. We can use existing methods to process the large sets and focus on the small sets in this paper. We develop several optimization heuristics for the small sets to improve the practical performance significantly. As the size boundary between the small sets and the large sets is crucial to the efficiency, we propose an effective size boundary selection algorithm to judiciously choose an appropriate size boundary, which works very well in practice. Experimental results on real-world datasets show that our methods achieve high performance and outperform the state-of-the-art approaches by up to an order of magnitude.
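A sketch of the small-set half of the size-aware idea: enumerate every c-subset of each small set, so that any two sets sharing a c-subset overlap in at least c elements (the paper's size boundary, heuristics, and large-set handling are omitted here):

```python
from itertools import combinations
from collections import defaultdict

def overlap_join_small(sets, c):
    # Bucket each set by each of its c-subsets; any two sets in the same
    # bucket share at least c elements.
    buckets = defaultdict(set)
    for i, s in enumerate(sets):
        for sub in combinations(sorted(s), c):
            buckets[sub].add(i)
    results = set()
    for ids in buckets.values():
        for i in ids:
            for j in ids:
                if i < j:
                    results.add((i, j))
    return sorted(results)

sets = [{1, 2, 3}, {2, 3, 4}, {5, 6, 7}, {3, 4, 5}]
print(overlap_join_small(sets, c=2))  # [(0, 1), (1, 3)]
```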
.  2018.  Proceedings of the 6th Workshop on Encrypted Computing & Applied Homomorphic Cryptography. :13-24.
This work describes a three-times (3×) improvement to the performance of secure computation of AES over a network of three parties with an honest majority. The throughput that is achieved is even better than that of computing AES in some scenarios of local (non-private) computation. The performance improvement is achieved through an optimization of the generic secure protocol, and, more importantly, through an optimization of the description of the AES function to support more efficient secure computation, and an optimization of the protocol to the underlying architecture. This demonstrates that the development process of efficient secure computation must include adapting the description of the computed function to be tailored to the protocol, and adapting the implementation of the protocol to the architecture. This work focuses on the secure computation of AES since it has been widely investigated as a de-facto standard performance benchmark for secure computation, and is also important by itself for many applications. Furthermore, parts of the improvements are general and not specific to AES, and can be applied to secure computation of arbitrary functions.
.  2018.  Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. :699-708.
We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for n parties is associated with a monotone function F: {0,1}^n → {0,1}. In such a scheme, a dealer distributes shares of a secret s among n parties. Any subset of parties T ⊆ [n] should be able to put together their shares and reconstruct the secret s if F(T)=1, and should have no information about s if F(T)=0. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions F. There is a large gap between lower and upper bounds for secret sharing. The best known scheme for general F has shares of size 2^(n-o(n)), but the best lower bound is Ω(n^2/log n). Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for F. Indeed, several researchers have suggested the existence of a representation size barrier which implies that the right answer is closer to the upper bound, namely, 2^(n-o(n)). In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size 2^(0.994n) and a linear secret sharing scheme for any access structure with shares of size 2^(0.999n). As a contribution of independent interest, we also construct a secret sharing scheme with shares of size 2^(Õ(√n)) for 2^(n choose n/2) monotone access structures, out of a total of 2^((n choose n/2)·(1+O(log n/n))) of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem.
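For context, the classic threshold special case that the general access-structure problem generalizes is Shamir's (t, n) scheme; a minimal sketch over a prime field (general monotone access structures, the subject of this abstract, require far more machinery):

```python
import random

P = 2**61 - 1  # a Mersenne prime

def share(secret, t, n, rng):
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(1)
shares = share(123456789, t=3, n=5, rng=rng)
print(reconstruct(shares[:3]) == 123456789)  # True: any 3 shares suffice
print(reconstruct(shares[:2]) == 123456789)  # False (except with negligible probability)
```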
.  2018.  2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA). :1-6.

Several algorithms have been introduced for data encryption and decryption to protect data from threats and intruders that steal or destroy it. DNA cryptography is a new concept that has attracted great interest in information security. In this paper, we propose a new enhanced polyalphabetic cipher algorithm (EPCA) as an enhanced version of the Vigenère cipher that avoids its limitations and weaknesses. DNA technology is used to convert binary data to a DNA strand. We compared the EPCA with the Vigenère cipher in terms of memory space and run time. The EPCA has a theoretical worst-case run time of O(N). The EPCA shows better performance in average memory space and comparable results in average running time for the tested data.
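A sketch of the two ingredients the abstract names: a plain Vigenère cipher and a binary-to-DNA-strand mapping (the EPCA enhancements themselves are not specified in the abstract, so they are not reproduced here; the 2-bit-per-base encoding is one common convention):

```python
# Map each 2-bit group to a nucleotide.
DNA = {"00": "A", "01": "C", "10": "G", "11": "T"}

def vigenere(text, key, decrypt=False):
    # Classic Vigenere over A-Z: shift each letter by the matching key letter.
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        shift = sign * (ord(key[i % len(key)]) - ord("A"))
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

def to_dna(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(DNA[bits[i:i + 2]] for i in range(0, len(bits), 2))

ct = vigenere("ATTACKATDAWN", "LEMON")
print(ct)                                   # LXFOPVEFRNHR
print(vigenere(ct, "LEMON", decrypt=True))  # ATTACKATDAWN
print(to_dna(b"\x1b"))                      # 00011011 -> ACGT
```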

.  2018.  2018 IEEE International Symposium on Information Theory (ISIT). :581-585.
We study side-channel attacks on the Shannon cipher system. To pose side-channel attacks on the Shannon cipher system, we regard them as signal estimation via encoded data from two distributed sensors. This can be formulated as the one-helper source coding problem posed and investigated by Ahlswede and Körner (1975) and Wyner (1975). We further investigate the posed problem to derive new secrecy bounds. Our results are derived by coupling the result Watanabe and Oohama (2012) obtained on a bounded-storage eavesdropper with the exponential strong converse theorem Oohama (2015) established for the one-helper source coding problem.
.  2018.  2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT). :1932-1935.
Mobile ad hoc networks are dynamic in nature and have no fixed infrastructure to govern the nodes in the network, so the challenge lies in coordinating such dynamically shifting nodes. The root problem of identifying and isolating misbehaving nodes that refuse to forward packets in multi-hop ad hoc networks is solved by the development of a comprehensive system called Audit-based Misbehavior Detection (AMD) that efficiently isolates selective and continuous packet droppers. AMD evaluates node behavior on a per-packet basis, without using energy-expensive overhearing techniques or intensive acknowledgment schemes. Moreover, AMD can detect selective dropping attacks even in end-to-end encrypted traffic and can be applied to multi-channel networks. Game-theoretic approaches are well suited to deciding the reward mechanisms under which the mobile nodes operate: rewards or penalties must be decided so as to ensure a clean and healthy MANET environment, and non-routine, unpredictable alterations are needed when deciding suitable and safe reward strategies. This work focuses on integrating an Audit-based Misbehavior Detection (AMD) scheme and an incentive-based reputation scheme with a game-theoretic approach called the supervisory game to analyze the selfish behavior of nodes in the MANET environment. The proposed scheme, GAMD, significantly reduces the cost of detecting misbehaving nodes in the network.
.  2018.  2018 IEEE Photonics Society Summer Topical Meeting Series (SUM). :9-10.
The use of quantum mechanical signals in communication opens up the opportunity to build new communication systems that accomplish tasks that communication with classical signal structures cannot achieve. Prominent examples are quantum key distribution protocols, which allow the generation of secret keys without computational assumptions on adversaries. Over the past decade, protocols have been developed that achieve tasks that can also be accomplished with classical signals, but where the quantum version of the protocol either uses fewer resources or leaks less information between the involved parties. The gap between quantum and classical can be exponential in the input size of the problem; examples include the comparison of data, the scheduling of appointments, and others. Until recently, it was thought that these protocols were of mere conceptual value and that the quantum advantage could not be realized. We changed that by developing quantum optical versions of these abstract protocols that can run with simple laser pulses, beam splitters, and detectors [1-3]. By now the first protocols have been successfully implemented [4], showing that a quantum advantage can be realized. The next step is to find and realize protocols that have high practical value.
.  2018.  2018 IEEE 15th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). :335-343.
This paper proposes an advanced round modification fault analysis (RMFA), at the theoretical level, of AEGIS-128, one of the seven finalists in the CAESAR competition. First, we clarify our assumptions and simplifications in the attack model, focusing on encryption security. Then, we emphasize the difficulty of applying vanilla RMFA to AEGIS-128 in the practical case. Finally, we demonstrate our advanced fault analysis of AEGIS-128 using machine-solver-based algebraic techniques. Our enhancement can be used to conquer practical scenarios that are difficult for vanilla RMFA. Simulation results show that when the fault is injected into the initialization phase and the number of rounds is reduced to one, two injection samples can extract all 128 key bits in less than two hours. This work can also be extended to other versions such as AEGIS-256.
.  2018.  2018 7th International Conference on Computers Communications and Control (ICCCC). :215-223.
Nowadays public-key cryptography is based on number theory problems, such as computing the discrete logarithm on an elliptic curve or factoring big integers. Even though these problems are considered difficult to solve with the help of a classical computer, they can be solved in polynomial time on a quantum computer. This is why the research community has proposed alternative solutions that are quantum-resistant. The process of finding adequate post-quantum cryptographic schemes has moved to the next level, right after NIST's announcement of post-quantum standardization. One of the oldest quantum-resistant propositions goes back to McEliece in 1978, who proposed a public-key cryptosystem based on coding theory. It benefits from really efficient algorithms as well as a strong mathematical background. Nonetheless, its security has been challenged many times and several variants have been cryptanalyzed. However, some versions remain unbroken. In this paper, we propose to give some background on coding theory in order to present some of the main flaws in the protocols. We analyze the existing side-channel attacks and give some recommendations on how to securely implement the most suitable variants. We also detail some structural attacks and potential drawbacks for new variants.
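As coding-theory background for McEliece-style systems, a minimal Hamming(7,4) encoder and single-error syndrome decoder (illustration only; McEliece uses much larger Goppa codes, and the matrices below are the standard systematic ones, not from the paper):

```python
# Systematic generator and parity-check matrices over GF(2): G = [I | A], H = [A^T | I].
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(msg):
    return [sum(msg[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def decode(word):
    syndrome = [sum(H[r][j] * word[j] for j in range(7)) % 2 for r in range(3)]
    if any(syndrome):
        # A nonzero syndrome equals the column of H at the flipped position.
        for j in range(7):
            if [H[r][j] for r in range(3)] == syndrome:
                word = word[:]
                word[j] ^= 1
                break
    return word[:4]  # systematic code: first 4 bits are the message

msg = [1, 0, 1, 1]
cw = encode(msg)
cw[5] ^= 1  # inject a single bit error
print(decode(cw) == msg)  # True
```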
2018-04-11
.  2017.  Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures. :3–12.

A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say k, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model, whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal (requiring essentially no interaction between the machines) protocol. Some well-known examples of techniques for creating summaries include sampling, linear sketching, and composable coresets. These techniques have been successfully used to design communication efficient solutions for many fundamental graph problems. However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a modest approximation factor of polylog(n) requires using representative summaries of size Ω̃(n^2), i.e. essentially no better summary exists than each machine simply sending its entire input graph. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines.
We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size Õ(n) that yield an Õ(1)-approximate solution. (Here and throughout the paper, we use Õ(·) notation to suppress polylog(n) factors, where n is the number of vertices in the graph.) In other words, a small subgraph of the input graph at each machine can be identified as its representative summary, and the final answer is then obtained by simply running any maximum matching or minimum vertex cover algorithm on these combined subgraphs. This results in an Õ(1)-approximation simultaneous protocol for these problems with Õ(nk) total communication when the input is randomly partitioned across k machines. We also prove our results are optimal in a very strong sense: we not only rule out the existence of smaller randomized composable coresets for these problems, but in fact show that our Õ(nk) bound for total communication is optimal for any simultaneous communication protocol (i.e. not only for randomized coresets) for these two problems. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication, improving the previous best known round complexity for these problems.

.  2017.  Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures. :95–100.

We show that the basic semidefinite programming relaxation value of any constraint satisfaction problem can be computed in NC; that is, in parallel polylogarithmic time and polynomial work. As a complexity-theoretic consequence we get that MIP1[k,c,s] ⊆ PSPACE provided s/c ≤ (0.62 − o(1))·k/2^k, resolving a question of Austrin, Håstad, and Pass. Here MIP1[k,c,s] is the class of languages decidable with completeness c and soundness s by an interactive proof system with k provers, each constrained to communicate just 1 bit.

.  2017.  Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :491–505.

Oblivious Random Access Machine (ORAM) enables a client to access her data without leaking her access patterns. Existing client-efficient ORAMs either achieve O(log N) client-server communication blowup without heavy computation, or O(1) blowup but with expensive homomorphic encryptions. It has been shown that O(log N) bandwidth blowup might not be practical for certain applications, while schemes with O(1) communication blowup incur even more delay due to costly homomorphic operations. In this paper, we propose a new distributed ORAM scheme referred to as Shamir Secret Sharing ORAM (S3ORAM), which achieves O(1) client-server bandwidth blowup and O(1) blocks of client storage without relying on costly partial homomorphic encryptions. S3ORAM harnesses Shamir Secret Sharing, tree-based ORAM structure and a secure multi-party multiplication protocol to eliminate costly homomorphic operations and, therefore, achieves O(1) client-server bandwidth blowup with a high computational efficiency. We conducted comprehensive experiments to assess the performance of S3ORAM and its counterparts on actual cloud environments, and showed that S3ORAM achieves three orders of magnitude lower end-to-end delay compared to alternatives with O(1) client communication blowup (Onion-ORAM), while it is one order of magnitude faster than Path-ORAM for a network with a moderate bandwidth quality. We have released the implementation of S3ORAM for further improvement and adaptation.

.  2017.  Proceedings of the 12th Annual Conference on Cyber and Information Security Research. :6:1–6:7.

Elliptic curve cryptography (ECC) is a relatively newer form of public key cryptography that provides more security per bit than other forms of cryptography still being used today. We explore the mathematical structure and operations of elliptic curves and how those properties make curves suitable tools for cryptography. A brief historical context is given followed by the safety of usage in production, as not all curves are free from vulnerabilities. Next, we compare ECC with other popular forms of cryptography for both key exchange and digital signatures, in terms of security and speed. Traditional applications of ECC, both theoretical and in-practice, are presented, including key exchange for web browser usage and DNSSEC. We examine multiple uses of ECC in a mobile context, including cellular phones and the Internet of Things. Modern applications of curves are explored, such as iris recognition, RFID, smart grid, as well as an application for E-health. Finally, we discuss how ECC stacks up in a post-quantum cryptography world.
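A toy Diffie-Hellman exchange over a tiny prime-field curve illustrates the elliptic-curve operations the survey above examines (illustration only; the curve, base point, and scalars are invented, and real deployments use standardized curves such as P-256 or Curve25519):

```python
P = 97
A_COEF, B_COEF = 2, 3  # curve y^2 = x^3 + 2x + 3 over F_97

def add(p1, p2):
    # Elliptic-curve group law; None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A_COEF) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):
    # Double-and-add scalar multiplication.
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

G = (3, 6)      # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
a, b = 13, 29   # the two parties' secret scalars
shared_a = mul(a, mul(b, G))
shared_b = mul(b, mul(a, G))
print(shared_a == shared_b)  # True: both sides derive the same shared point
```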

.  2017.  Proceedings of the 22Nd ACM on Symposium on Access Control Models and Technologies. :143–154.

It is increasingly common to outsource data storage to untrusted, third party (e.g. cloud) servers. However, in such settings, low-level online reference monitors may not be appropriate for enforcing read access, and thus cryptographic enforcement schemes (CESs) may be required. Much of the research on cryptographic access control has focused on the use of specific primitives and, primarily, on how to generate appropriate keys and fails to model the access control system as a whole. Recent work in the context of role-based access control has shown a gap between theoretical policy specification and computationally secure implementations of access control policies, potentially leading to insecure implementations. Without a formal model, it is hard to (i) reason about the correctness and security of a CES, and (ii) show that the security properties of a particular cryptographic primitive are sufficient to guarantee security of the CES as a whole. In this paper, we provide a rigorous definitional framework for a CES that enforces read-only information flow policies (which encompass many practical forms of access control, including role-based policies). This framework (i) provides a tool by which instantiations of CESs can be proven correct and secure, (ii) is independent of any particular cryptographic primitives used to instantiate a CES, and (iii) helps to identify the limitations of current primitives (e.g. key assignment schemes) as components of a CES.
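One of the "current primitives" the abstract names is a key assignment scheme. A minimal sketch of hash-based top-down key derivation for a read-only information-flow hierarchy (the labels, hierarchy, and derivation rule here are made up for illustration and are not the paper's framework):

```python
import hashlib

def derive(parent_key: bytes, child_label: str) -> bytes:
    # Child key = H(parent key || child label): holders of a high label can
    # derive keys for everything below it, but the one-way hash prevents
    # climbing back up the hierarchy.
    return hashlib.sha256(parent_key + child_label.encode()).digest()

top = hashlib.sha256(b"master secret").digest()  # key for the TOP_SECRET label
secret = derive(top, "SECRET")                   # derivable by TOP_SECRET holders
public = derive(secret, "PUBLIC")                # derivable by SECRET holders

# A reader cleared at SECRET can reach PUBLIC's key but not TOP_SECRET's.
print(derive(secret, "PUBLIC") == public)  # True
```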

.  2017.  Proceedings of the 2017 International Conference on Cryptography, Security and Privacy. :28–32.

Because adaptive steganography selects edge and texture areas for payload, its theoretical analysis is limited by modeling difficulty. This paper introduces a novel method to study the pixel-value difference (PVD) embedding scheme. First, the difference histogram values of the cover image are used as parameters, and a variance formula for the PVD stego noise is obtained; the accuracy of this formula is verified through analysis with standard pictures. Second, the stego noise is divided into six kinds of pixel regions, and the regional noise variances are used to compare the security of PVD and least-significant-bit matching (LSBM) steganography. A mathematical conclusion is presented that, with an embedding capacity of less than 2.75 bits per pixel, PVD is never safer than LSBM at the same embedding rate, regardless of region selection. Finally, 10000 image samples are used to check the validity of the mathematical conclusion. For most images and regions, the data are consistent with this judgment. The exceptional cases are analyzed carefully and found to be caused by the randomness of pixel selection and abandoned blocks in the PVD scheme. In summary, the agreement of theory and practice demonstrates the effectiveness of our new method.
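For reference, a minimal embed/extract step of a Wu-Tsai-style PVD scheme, the kind of scheme the analysis above targets (a simplified range table; this is background on PVD itself, not the paper's variance analysis):

```python
# Standard-style PVD range table: wider pixel-pair differences hold more bits.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def capacity(d):
    # Bits that fit in a pixel pair whose difference falls in a given range.
    for lo, hi in RANGES:
        if lo <= abs(d) <= hi:
            return (hi - lo + 1).bit_length() - 1, lo
    raise ValueError(d)

def embed(p1, p2, bits):
    d = p2 - p1
    n, lo = capacity(d)
    value = int(bits[:n], 2)
    new_d = (lo + value) if d >= 0 else -(lo + value)
    m = new_d - d
    # Spread the difference adjustment across both pixels.
    return p1 - m // 2, p2 + (m - m // 2), bits[n:]

def extract(p1, p2):
    d = abs(p2 - p1)
    n, lo = capacity(d)
    return format(d - lo, f"0{n}b")

p1, p2, rest = embed(100, 110, "1011")
print(p1, p2, extract(p1, p2))  # 99 112 101 -> the new difference hides "101"
```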