Visible to the public Theoretical Cryptography

SoS Newsletter- Advanced Book Block

Theoretical Cryptography

Cryptography can exist only if there is an underlying mathematical hardness, constructed so that a desired functionality is maintained even under malicious attempts to change or destroy it. Hence the foundations of theoretical cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and solve natural "security concerns" mathematically, using probability-based definitions, various constructions, complexity-theoretic primitives, and proofs of security. Research into theoretical cryptography addresses the question of how to get from X to Z without allowing the adversary to go backwards from Z to X. The work presented here covers a range of approaches and methods, including block ciphers, random grids, obfuscation, and provable security.

  • Xiaotian Wu, Wei Sun, "Improved Tagged Visual Cryptography By Random Grids," Signal Processing, Volume 97, April, 2014, Pages 64-82. (ID#:14-1820) Available at: This paper introduces a (k,n) Tagged Visual Cryptography (TVC) scheme using the concept of a random grid (RG). This method addresses known challenges of (k,n) TVC, such as pixel expansion and the need for a code-book during encoding. Cheating prevention is also supported. Keywords: Cheat preventing, Random grid, Tagged-share, Visual cryptography, Visual secret sharing
  • Andrey Bogdanov, Elif Bilge Kavun, Elmar Tischhauser, Tolga Yalcin, "Large-Scale High-Resolution Computational Validation of Novel Complexity Models In Linear Cryptanalysis," Journal of Computational and Applied Mathematics, Volume 259, March, 2014, Pages 592-598. (ID#:14-1821) Available at: This paper analyzes a new theoretical model for evaluating the complexities of linear cryptanalysis attacks, based on an enhanced wrong-key randomization hypothesis. The authors study the model on larger ciphers (block sizes of 32 bits and above), as it had previously been verified only for ciphers with a 20-bit block size. Keywords: Block ciphers, Data complexity, Linear cryptanalysis, Wrong key randomization hypothesis, Theoretical cryptography
  • Vipul Goyal, Rafail Ostrovsky, Alessandra Scafuro, Ivan Visconti, "Black-box Non-Black-Box Zero Knowledge," STOC '14 Proceedings of the 46th Annual ACM Symposium on Theory of Computing, May 2014, Pages 515-524. (ID#:14-1822) Available at: Motivated by theoretical and practical interest, the challenging task of designing cryptographic protocols having only black-box access to primitives has generated various breakthroughs in the last decade. Despite such positive results, and even though nowadays we know black-box constructions for secure two-party and multi-party computation even in constant rounds, several constructions in cryptography still critically require non-black-box use of primitives in order to securely realize some fundamental tasks. As such, the study of the gap between black-box and non-black-box constructions still includes major open questions. In this work we make progress towards filling the above gap. We consider the case of black-box constructions for computations requiring that even the size of the input of a player remains hidden. We show how to commit to a string of arbitrary size and to prove statements over the bits of the string. Both the commitment and the proof are succinct, hide the input size and use standard primitives in a black-box way. We achieve such a result by giving a black-box construction of an extendable Merkle tree that relies on a novel use of the "MPC in the head" paradigm of Ishai et al. [STOC 2007]. We show the power of our new techniques by giving the first black-box constant-round public-coin zero knowledge argument for NP. To achieve this result we use the non-black-box simulation technique introduced by Barak [FOCS 2001], the PCP of Proximity introduced by Ben-Sasson et al. [STOC 2004], together with a black-box public-coin witness indistinguishable universal argument that we construct along the way. 
Keywords: black-box use of primitives, cryptography, input-size hiding protocols, public-coin zero-knowledge, Theoretical cryptography
  • Shafi Goldwasser, Guy N. Rothblum, "On Best-Possible Obfuscation," Journal of Cryptology, Volume 27 Issue 3, July 2014, Pages 480-505. (ID#:14-1823) Available at: This work explores the notion of best-possible obfuscation, which rivals current black-box obfuscation. Contrary to black-box obfuscation, which leaks no information, best-possible obfuscation leaks as little information as any other program of similar size and functionality. The idea is that any information not hidden by the obfuscated program is similarly not hidden by any other comparable program. Results of the study are discussed in detail.
  • Keita Emura, Goichiro Hanaoka, Yunlei Zhao, Proceedings Of The 2nd ACM Workshop On ASIA Public-Key Cryptography, Kyoto, Japan -- June 03 - 06, 2014. (ID#:14-1824) Available at: It is our great pleasure to welcome you to The 2nd ACM Asia Public-Key Cryptography Workshop -- AsiaPKC'14, held on June 3, 2014, in conjunction with The 9th ACM Symposium on Information, Computer and Communications Security (ASIACCS'14). Public key cryptography plays an essential role in ensuring many security properties required in data processing of various kinds. The theme of this workshop is novel public key cryptosystems for solving a wide range of real-life application problems. This workshop solicits original contributions on both applied and theoretical aspects of public key cryptography. The call for papers attracted 22 submissions from Asia, Europe, North America, and South America. The program committee accepted 6 papers based on their overall quality and novelty (acceptance ratio: 27%). In addition, the program includes an invited talk by Dr. Miyako Ohkubo of the National Institute of Information and Communications Technology (NICT), Japan ("Introduction of Structure-Preserving Signatures"). We hope these proceedings will serve as a valuable reference for researchers and practitioners in the field of public-key cryptography and its applications. Keywords: Theoretical cryptography
  • Michael Backes, Catalin Hritcu, Matteo Maffei, "Union, Intersection And Refinement Types And Reasoning About Type Disjointness For Secure Protocol Implementations," Journal of Computer Security - Foundational Aspects of Security, Volume 22 Issue 2, March 2014, Pages 301-353. (ID#:14-1825) Available at: This article presents a new type system for the analysis of reference protocol implementations. The type system includes refinement, union, intersection, and polymorphic types, which can statically characterize protocols based on asymmetric cryptography, authenticity and integrity properties, and applications based on zero-knowledge proofs. Keywords: Concurrent Lambda-Calculus, Intersection Types, Mechanized Metatheory, Reference Implementations, Refinement Types, Security Protocols, Type Systems, Union Types, Verification, Zero-Knowledge Proofs
  • Jiqiang Lu, Yongzhuang Wei, Jongsung Kim, Enes Pasalic, "The Higher-Order Meet-In-The-Middle Attack and Its Application To The Camellia Block Cipher," Theoretical Computer Science, Volume 527, March, 2014, Pages 102-122. (ID#:14-1826) Available at: The Camellia block cipher has a 128-bit block length, a user key of 128, 192 or 256 bits long, and a total of 18 rounds for a 128-bit key and 24 rounds for a 192 or 256-bit key. It is a Japanese CRYPTREC-recommended e-government cipher, a European NESSIE selected cipher and an ISO international standard. The meet-in-the-middle attack is a technique for analysing the security of a block cipher. In this paper, we propose an extension of the meet-in-the-middle attack, which we call the higher-order meet-in-the-middle (HO-MitM) attack; the core idea of the HO-MitM attack is to use multiple plaintexts to cancel some key-dependent component(s) or parameter(s) when constructing a basic unit of "value-in-the-middle". Then we introduce a novel approach, which combines integral cryptanalysis with the meet-in-the-middle attack, to construct HO-MitM attacks on 10-round Camellia with the FL/FL^-1 functions under 128 key bits, 11-round Camellia with the FL/FL^-1 functions under 192 key bits and 12-round Camellia with the FL/FL^-1 functions under 256 key bits. Finally, we apply an existing approach to construct HO-MitM attacks on 14-round Camellia without the FL/FL^-1 functions under 192 key bits and 16-round Camellia without the FL/FL^-1 functions under 256 key bits. The HO-MitM attack can potentially be used to cryptanalyse other block ciphers. Keywords: Block cipher, Camellia, Cryptology, Integral cryptanalysis, Meet-in-the-middle attack
  • Shaohua Tang, Lingling Xu, "Towards Provably Secure Proxy Signature Scheme Based On Isomorphisms Of Polynomials," Future Generation Computer Systems, Volume 30, January, 2014, Pages 91-97. (ID#:14-1827) Available at: This paper proposes a proxy signature scheme based on the Isomorphism of Polynomials (IP) problem, within the framework of Multivariate Public Key Cryptography (MPKC). The scheme is designed to resist projected quantum-computing attacks, a constructive step toward understanding provable security for MPKCs. Keywords: Isomorphism of Polynomials, Multivariate Public Key Cryptography, Post-Quantum Cryptography, Provable security, Proxy signature
  • Osterweil, E.; Massey, D.; McPherson, D.; Lixia Zhang, "Verifying Keys through Publicity and Communities of Trust: Quantifying Off-Axis Corroboration," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.2, pp.283,291, Feb. 2014. (ID#:14-1828) Available at: The DNS Security Extensions (DNSSEC) arguably make DNS the first core Internet system to be protected using public key cryptography. The success of DNSSEC not only protects the DNS, but has generated interest in using this secured global database for new services such as those proposed by the IETF DANE working group. However, continued success is only possible if several important operational issues can be addressed. For example, .gov and .arpa have already suffered misconfigurations where DNS continued to function properly, but DNSSEC failed (thus, orphaning their entire subtrees in DNSSEC). Internet-scale verification systems must tolerate this type of chaos, but what kind of verification can one derive for systems with dynamism like this? In this paper, we propose to achieve robust verification with a new theoretical model, called Public Data, which treats operational deployments as Communities of Trust (CoTs) and makes them the verification substrate. Using a realization of the above idea, called Vantages, we quantitatively show that using a reasonable DNSSEC deployment model and a typical choice of a CoT, an adversary would need to be able to have visibility into and perform on-path Man-in-the-Middle (MitM) attacks on arbitrary traffic into and out of up to 90 percent of all of the Autonomous Systems (ASes) in the Internet before having even a 10 percent chance of spoofing a DNSKEY. Further, our limited deployment of Vantages has outperformed the verifiability of DNSSEC and has properly validated its data up to 99.5 percent of the time. 
Keywords: Internet; public key cryptography; trusted computing; .arpa; .gov; AS; CoT; DNS security extensions; DNSKEY spoofing; DNSSEC deployment model; IETF DANE working group; Internet-scale verification systems; MitM; Vantages; autonomous systems; communities of trust; core Internet system; key verification; man-in-the-middle attacks; off-axis corroboration; public data; secured global database; DNSKEY; DNSSEC; p2p; verification
  • Young-Chang Hou; Shih-Chieh Wei; Chia-Yin Lin, "Random-Grid-Based Visual Cryptography Schemes," Circuits and Systems for Video Technology, IEEE Transactions on , vol.24, no.5, pp.733,744, May 2014. (ID#:14-1829) Available at: This paper discusses a random-grid-based nonexpanded visual cryptography scheme for generating both meaningful and noise-like shares. First, the distribution of black pixels on the share images and the stack image is analyzed. A probability allocation method is then proposed that is capable of producing the best contrast in both the share images and the stack image. With our method, not only can different cover images be used to hide the secret image, but the contrast can be adjusted as needed. The most important result is the improvement of the visual quality of both the share images and the stack image to their theoretical maximum. Our meaningful visual secret sharing method is shown in experiments to be superior to past methods. Keywords: cryptography; image processing; probability; black pixels; probability allocation method; random-grid-based nonexpanded visual cryptography scheme; secret image; share images; stack image; visual secret sharing method; Encryption; Image color analysis; Information management; Stacking; Visualization; Meaningful shares; random grid; secret sharing; visual cryptography
  • Portmann, C., "Key Recycling in Authentication," Information Theory, IEEE Transactions on , vol.60, no.7, pp.4383,4396, July 2014. (ID#:14-1830) Available at: In their seminal work on authentication, Wegman and Carter propose that to authenticate multiple messages, it is sufficient to reuse the same hash function as long as each tag is encrypted with a one-time pad. They argue that because the one-time pad is perfectly hiding, the hash function used remains completely unknown to the adversary. Since their proof is not composable, we revisit it using a composable security framework. It turns out that the above argument is insufficient: if the adversary learns whether a corrupted message was accepted or rejected, information about the hash function is leaked, and after a bounded finite number of rounds it is completely known. We show however that this leak is very small: Wegman and Carter's protocol is still ε-secure, if ε-almost strongly universal_2 hash functions are used. This implies that the secret key corresponding to the choice of hash function can be reused in the next round of authentication without any additional error than this ε. We also show that if the players have a mild form of synchronization, namely that the receiver knows when a message should be received, the key can be recycled for any arbitrary task, not only new rounds of authentication. Keywords: Abstracts; Authentication; Computational modeling; Cryptography; Protocols; Recycling; Cryptography; authentication; composable security; information-theoretic security
  • Singh, H.; Sachdev, A, "The Quantum way of Cloud Computing," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on , vol., no., pp.397,400, 6-8 Feb. 2014. (ID#:14-1831) Available at: Quantum Computing and Cloud Computing are the technologies which have the capability to shape the future of computing. Quantum computing focuses on creating super-fast computers using the concepts of quantum physics whereas Cloud computing allows the computing power to be provided as a service. This paper presents a theoretical approach towards the possibility of a Quantum-Cloud, i.e. quantum computing as a service. This will combine the fields of quantum computing and cloud computing, resulting in an evolutionary technology. Also, this paper discusses the possible advantages of this in the near future. Keywords: cloud computing; quantum computing; super-fast computers; Cryptography; Hardware; Quantum computing; Cloud Computing; Quantum Cloud; Qubit
  • Kishore, N.; Kapoor, B., "An efficient parallel algorithm for hash computation in security and forensics applications," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.873,877, 21-22 Feb. 2014. (ID#:14-1832) Available at: Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API and experiments performed using machines with multicore processors. The results show a performance gain by more than a factor of 3 when running on the 8-core configuration of the machine. Keywords: application program interfaces; cryptography; digital forensics; digital signatures; file organization; parallel algorithms; probability; OpenMP API; SHA-1 algorithm; collision probability; data preservation; digital forensics; digital signature; hash computation; hashing algorithms; information security; parallel algorithm; standard hash function; Algorithm design and analysis; Conferences; Cryptography; Multicore processing; Program processors; Standards; Cryptographic Hash Function; Digital Forensics; Digital Signature; MD5; Multicore Processors; OpenMP; SHA-1
  • Ferretti, L.; Colajanni, M.; Marchetti, M., "Distributed, Concurrent, and Independent Access to Encrypted Cloud Databases," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.2, pp.437,446, Feb. 2014. (ID#:14-1833) Available at: Placing critical data in the hands of a cloud provider should come with the guarantee of security and availability for data at rest, in motion, and in use. Several alternatives exist for storage services, while data confidentiality solutions for the database as a service paradigm are still immature. We propose a novel architecture that integrates cloud database services with data confidentiality and the possibility of executing concurrent operations on encrypted data. This is the first solution supporting geographically distributed clients to connect directly to an encrypted cloud database, and to execute concurrent and independent operations including those modifying the database structure. The proposed architecture has the further advantage of eliminating intermediate proxies that limit the elasticity, availability, and scalability properties that are intrinsic in cloud-based solutions. The efficacy of the proposed architecture is evaluated through theoretical analyses and extensive experimental results based on a prototype implementation subject to the TPC-C standard benchmark for different numbers of clients and network latencies. Keywords: cloud computing; cryptography; database management systems; TPC-C standard benchmark; availability property; cloud database services; concurrent access; data confidentiality; database structure modification; distributed access; elasticity property; encrypted cloud database; encrypted data concurrent operation execution; geographically distributed clients; independent access; intermediate proxies elimination; network latencies; scalability property; Cloud; SecureDBaaS; confidentiality; database; security
  • Alahmadi, A; Abdelhakim, M.; Jian Ren; Tongtong Li, "Defense Against Primary User Emulation Attacks in Cognitive Radio Networks Using Advanced Encryption Standard," Information Forensics and Security, IEEE Transactions on , vol.9, no.5, pp.772,781, May 2014. (ID#:14-1834) Available at: This paper considers primary user emulation attacks in cognitive radio networks operating in the white spaces of the digital TV (DTV) band. We propose a reliable AES-assisted DTV scheme, in which an AES-encrypted reference signal is generated at the TV transmitter and used as the sync bits of the DTV data frames. By allowing a shared secret between the transmitter and the receiver, the reference signal can be regenerated at the receiver and used to achieve accurate identification of the authorized primary users. In addition, when combined with the analysis on the autocorrelation of the received signal, the presence of the malicious user can be detected accurately whether or not the primary user is present. We analyze the effectiveness of the proposed approach through both theoretical analysis and simulation examples. It is shown that with the AES-assisted DTV scheme, the primary user, as well as malicious user, can be detected with high accuracy under primary user emulation attacks. It should be emphasized that the proposed scheme requires no changes in hardware or system structure except for a plug-in AES chip. Potentially, it can be applied directly to today's DTV system under primary user emulation attacks for more efficient spectrum sharing. 
Keywords: cognitive radio; correlation methods; public key cryptography; AES-encrypted reference signal; DTV data frames; TV transmitter; advanced encryption standard; authorized primary users; autocorrelation; cognitive radio networks; digital TV band; malicious user; plug-in chip; primary user emulation attacks; reference signal; spectrum sharing; sync bits; white spaces; Digital TV; Emulation; Manganese; Random variables; Receivers; Synchronization; Transmitters; Network security; dynamic spectrum access (DSA); eight-level vestigial sideband (8-VSB); primary user emulation attacks (PUEA); secure spectrum sensing
  • Hu, Pengfei; Xing, Kai; Cheng, Xiuzhen; Wei, Hao; Zhu, Haojin, "Information leaks out: Attacks and countermeasures on compressive data gathering in wireless sensor networks," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.1258,1266, April 27 2014-May 2 2014. (ID#:14-1835) Available at: Compressive sensing (CS) has been viewed as a promising technology to greatly improve the communication efficiency of data gathering in wireless sensor networks. However, this new data collection paradigm may bring in new threats, yet few studies have paid attention to preventing information leakage during compressive data gathering. In this paper, we identify two statistical inference attacks and demonstrate that traditional compressive data gathering may suffer from serious information leakage under these attacks. In our theoretical analysis, we quantitatively analyze the estimation error of compressive data gathering through extensive statistical analysis, based on which we propose a new secure compressive data aggregation scheme by adaptively changing the measurement coefficients at each sensor and correspondingly at the sink without the need of time synchronization. In our analysis, we show that the proposed scheme could significantly improve data confidentiality at light computational and communication overhead. Keywords: Compressed sensing; Conferences; Cryptography; matching pursuit algorithms; Vectors; Wireless sensor networks
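Several entries above (Wu and Sun; Hou, Wei, and Lin) build on random-grid visual secret sharing. The core idea can be illustrated with a minimal (2,2) random-grid sketch, not taken from either paper: the function names (`rg_encrypt`, `stack`) and the list-of-lists image representation are illustrative assumptions.

```python
import random

def rg_encrypt(secret):
    """Split a binary secret image (0 = white, 1 = black) into two
    random-grid shares with no pixel expansion."""
    # The first share is a purely random grid.
    share1 = [[random.randint(0, 1) for _ in row] for row in secret]
    # For a white secret pixel the second share copies the first, so
    # stacking is black only half the time on average; for a black
    # pixel it is the complement, so stacking is always black.
    share2 = [
        [s1 if px == 0 else 1 - s1 for px, s1 in zip(row, srow)]
        for row, srow in zip(secret, share1)
    ]
    return share1, share2

def stack(share1, share2):
    """Simulate stacking transparencies: a pixel is black (1) if it is
    black on either share (logical OR)."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]
```

Each share alone is a uniformly random grid and so reveals nothing; the contrast between always-black and half-black regions in the stacked image is what makes the secret visible to the eye.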
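The meet-in-the-middle technique underlying Lu et al.'s HO-MitM work can be illustrated on the classic case it was invented for: double encryption. The sketch below uses a hypothetical 8-bit toy cipher, not Camellia and not the higher-order attack itself; `toy_encrypt`, its round function, and the 8-bit key space are all illustrative assumptions chosen so the full attack runs in milliseconds.

```python
def toy_encrypt(block, key):
    """Toy invertible 8-bit 'cipher': four rounds of add, XOR, rotate."""
    x = block
    for r in range(4):
        x = (x + key) & 0xFF
        x ^= (key >> r) & 0xFF
        x = ((x << 1) | (x >> 7)) & 0xFF  # rotate left by 1
    return x

def toy_decrypt(block, key):
    """Exact inverse of toy_encrypt (rounds undone in reverse order)."""
    x = block
    for r in reversed(range(4)):
        x = ((x >> 1) | (x << 7)) & 0xFF  # rotate right by 1
        x ^= (key >> r) & 0xFF
        x = (x - key) & 0xFF
    return x

def mitm_attack(pairs):
    """Recover (k1, k2) from plaintext/ciphertext pairs under double
    encryption c = E_k2(E_k1(p)), meeting in the middle: ~2*2^8 cipher
    evaluations instead of 2^16 brute force."""
    p0, c0 = pairs[0]
    # Forward table: intermediate value -> candidate first keys.
    forward = {}
    for k1 in range(256):
        forward.setdefault(toy_encrypt(p0, k1), []).append(k1)
    # Work backwards from the ciphertext and look for a match.
    candidates = [
        (k1, k2)
        for k2 in range(256)
        for k1 in forward.get(toy_decrypt(c0, k2), [])
    ]
    # Filter false positives against the remaining pairs.
    return [
        (k1, k2) for (k1, k2) in candidates
        if all(toy_encrypt(toy_encrypt(p, k1), k2) == c for p, c in pairs)
    ]
```

The "value-in-the-middle" here is the single intermediate block; the HO-MitM idea in the cited paper generalizes this by combining multiple plaintexts so that key-dependent components cancel.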
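Portmann's entry revisits Wegman-Carter authentication: the tag is a universal hash of the message, encrypted with a one-time pad so that the hash key can be recycled across messages. The following is a minimal sketch of that construction under assumed parameters; the Mersenne prime, the polynomial hash, and the names `wc_tag`/`wc_verify` are illustrative choices, not the paper's formalism.

```python
P = 2**61 - 1  # a Mersenne prime; hashing is done modulo P

def poly_hash(key, message):
    """Almost-universal polynomial hash of a byte string, evaluated at
    the secret point `key` in GF(P)."""
    h = 0
    for byte in message:
        h = (h * key + byte + 1) % P  # +1 so trailing zero bytes matter
    return h

def wc_tag(hash_key, pad, message):
    """Wegman-Carter tag: universal hash, then one-time-pad encryption
    (addition mod P). The pad must be fresh and uniform per message;
    the hash key may then be reused for the next message."""
    return (poly_hash(hash_key, message) + pad) % P

def wc_verify(hash_key, pad, message, tag):
    """Recompute the tag and compare."""
    return wc_tag(hash_key, pad, message) == tag
```

Because the pad perfectly hides each tag, an eavesdropper learns nothing about `hash_key`; the subtlety the paper analyzes is the small ε of information leaked when the adversary also observes accept/reject decisions on forged messages.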
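Kishore and Kapoor's parallel hash breaks the chain dependency of a standard iterated hash by hashing chunks independently and then hashing the concatenated chunk digests. A rough Python sketch of that general idea follows; the chunk size, the thread pool, and the name `parallel_hash` are assumptions for illustration, whereas the cited paper modifies SHA-1 internally and uses OpenMP.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 16  # 64 KiB chunks (an arbitrary illustrative choice)

def parallel_hash(data, workers=4):
    """Hash fixed-size chunks independently (the parallelisable step),
    then hash the concatenation of the chunk digests. Note that this
    tree-style digest is NOT interchangeable with plain SHA-1 of the
    whole input; it defines a different hash function."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves chunk order, so the result is deterministic
        # regardless of how many workers are used.
        digests = pool.map(lambda c: hashlib.sha1(c).digest(), chunks)
    return hashlib.sha1(b"".join(digests)).hexdigest()
```

Each chunk digest depends only on its own chunk, so the per-chunk work can be distributed across cores, which is the source of the speedup the paper reports; only the short final combining hash remains sequential.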


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.