Biblio

Found 1024 results

Filters: First Letter Of Title is C
1982
Robling Denning, Dorothy Elizabeth.  1982.  Cryptography and Data Security. :414.

Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data.

Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further.

Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

1994
Amoroso, E., Merritt, M..  1994.  Composing system integrity using I/O automata. Tenth Annual Computer Security Applications Conference. :34—43.
The I/O automata model of Lynch and Tuttle (1987) is summarized and used to formalize several types of system integrity based on the control of transitions to invalid states. Type-A integrity is exhibited by systems with no invalid initial states and that disallow transitions from valid reachable to invalid states. Type-B integrity is exhibited by systems that disallow externally-controlled transitions from valid reachable to invalid states. Type-C integrity is exhibited by systems that allow locally-controlled or externally-controlled transitions from reachable to invalid states. Strict-B integrity is exhibited by systems that are Type-B but not Type-A. Strict-C integrity is exhibited by systems that are Type-C but not Type-B. Basic results on the closure properties that hold under composition of systems exhibiting these types of integrity are presented in I/O automata-theoretic terms. Specifically, Type-A, Type-B, and Type-C integrity are shown to be composable, whereas Strict-B and Strict-C integrity are shown not to be generally composable. The integrity definitions and compositional results are illustrated using the familiar vending machine example, specified as an I/O automaton and composed with a customer environment. The implications of the integrity definitions and compositional results for practical system design are discussed, and a research plan for future work is outlined.
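
For readers new to these definitions, the sketch below models a system as a simple labeled transition system and checks the Type-A and Type-B conditions directly; it is an illustrative toy in Python rather than the paper's I/O-automata formalism, and the vending-machine-style states and labels are made up.

    # Toy illustration of the integrity definitions: a system is modeled as states,
    # initial states, a set of valid states, and transitions tagged as locally or
    # externally controlled. This is far simpler than the I/O-automata formalism.

    def reachable(initial, transitions):
        """All states reachable from the initial states."""
        seen, frontier = set(initial), list(initial)
        while frontier:
            s = frontier.pop()
            for src, _label, _ctrl, dst in transitions:
                if src == s and dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
        return seen

    def type_a(initial, valid, transitions):
        """Type-A: no invalid initial state and no transition from a valid
        reachable state to an invalid state."""
        if any(s not in valid for s in initial):
            return False
        reach = reachable(initial, transitions)
        return not any(src in valid and src in reach and dst not in valid
                       for src, _label, _ctrl, dst in transitions)

    def type_b(initial, valid, transitions):
        """Type-B: no externally controlled transition from a valid reachable
        state to an invalid state (locally controlled failures are tolerated)."""
        reach = reachable(initial, transitions)
        return not any(ctrl == "external" and src in valid and src in reach
                       and dst not in valid
                       for src, _label, ctrl, dst in transitions)

    # Hypothetical vending-machine-like system; 'broken' is the only invalid state.
    initial = {"idle"}
    valid = {"idle", "paid"}
    transitions = [
        ("idle", "coin", "external", "paid"),
        ("paid", "dispense", "local", "idle"),
        ("paid", "jam", "local", "broken"),   # a locally controlled failure
    ]

    print("Type-A:", type_a(initial, valid, transitions))  # False: 'jam' reaches 'broken'
    print("Type-B:", type_b(initial, valid, transitions))  # True: no external bad move
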
2002
Imai, H., Hanaoka, G., Shikata, J., Otsuka, A., Nascimento, A. C..  2002.  Cryptography with information theoretic security. Proceedings of the IEEE Information Theory Workshop. :73–.
Summary form only given. We discuss information-theoretic methods for proving the security of cryptosystems. We study what are called unconditionally secure (or information-theoretically secure) cryptographic schemes, in search of systems that can provide long-term security and do not impose limits on the adversary's computational power.
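
For background, the canonical unconditionally secure scheme is the one-time pad; the minimal sketch below is textbook material, not a construction from this paper.

    import secrets

    def otp(data: bytes, key: bytes) -> bytes:
        # With a truly random key as long as the message and never reused, the
        # ciphertext is statistically independent of the plaintext, so secrecy
        # holds regardless of the adversary's computational power.
        assert len(key) == len(data)
        return bytes(a ^ b for a, b in zip(data, key))

    message = b"attack at dawn"
    key = secrets.token_bytes(len(message))   # one-time, uniformly random key
    ciphertext = otp(message, key)
    assert otp(ciphertext, key) == message    # XOR is its own inverse
    print(ciphertext.hex())
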
2009
Vyetrenko, S., Khosla, A., Ho, T..  2009.  On combining information-theoretic and cryptographic approaches to network coding security against the pollution attack. 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers. :788–792.
In this paper we consider the pollution attack in network-coded systems where network nodes are computationally limited. We consider the combined use of cryptographic signature-based security and information-theoretic network error correction and propose a fountain-like network error correction code construction suitable for this purpose.
2011
Cortier, Veronique, Warinschi, Bogdan.  2011.  A Composable Computational Soundness Notion. Proceedings of the 18th ACM conference on Computer and communications security. :63–74.

Computational soundness results show that under certain conditions it is possible to conclude computational security whenever symbolic security holds. Unfortunately, each soundness result is usually established for some set of cryptographic primitives, and extending the result to encompass new primitives typically requires redoing most of the work. In this paper we suggest a way of getting around this problem. We propose a notion of computational soundness that we term deduction soundness. As for other soundness notions, our definition captures the idea that a computational adversary does not have any more power than a symbolic adversary. However, a key aspect of deduction soundness is that it considers, intrinsically, the use of the primitives in the presence of functions specified by the adversary. As a consequence, the resulting notion is amenable to modular extensions. We prove that a deduction sound implementation of some arbitrary primitives can be extended to include asymmetric encryption and public data structures (e.g., pairings or lists), without repeating the original proof effort. Furthermore, our notion of soundness concerns cryptographic primitives in a way that is independent of any protocol specification language. Nonetheless, we show that deduction soundness leads to computational soundness for languages (or protocols) that satisfy a so-called commutation property.

2012
Atkinson, Simon Reay, Walker, David, Beaulne, Kevin, Hossain, Liaquat.  2012.  Cyber – Transparencies, Assurance and Deterrence. 2012 International Conference on Cyber Security. :119–126.
Cyber- has often been considered a coordination-and-control, as opposed to collaborative-influence, medium. This conceptual-design paper, uniquely, builds upon a number of entangled, cross-disciplinary research strands – integrating engineering and conflict studies – and a detailed literature review to propose a new paradigm of assurance and deterrence models. We consider an ontology for Cyber-sûréte, which combines both the social trusts necessary for [knowledge and information] assurance, such as collaboration by social influence (CSI), and the technological controls and rules for secure information management, referred to as coordination by rule and control (CRC). We posit Cyber-sûréte as enabling both a 'safe-to-fail' ecology (in which learning, testing and adaptation can take place) within a fail-safe supervisory control and data acquisition (SCADA-type) system, e.g. in a nuclear power plant. Building upon traditional state-based threat analysis, we consider Warning Time and the Threat equation in relation to policies for managing Cyber-Deterrence. We examine how the goods of Cyber- might be galvanised so as to encourage virtuous behaviour and deter and/or dissuade ne'er-do-wells through multiple transparencies. We consider how the Deterrence-escalator may be managed by identifying both weak influence and strong control signals so as to create a more benign and responsive cyber-ecology, in which strengths can be exploited and weaknesses identified. Finally, we consider declaratory/mutual transparencies as opposed to legalistic/controlled transparency.
Slavin, R., Hui Shen, Jianwei Niu.  2012.  Characterizations and boundaries of security requirements patterns. Requirements Patterns (RePa), 2012 IEEE Second International Workshop on. :48-53.

Very often in the software development life cycle, security is applied too late or important security aspects are overlooked. Although the use of security patterns is gaining popularity, the current state of security requirements patterns is such that there is not much in terms of a defining structure. To address this issue, we are working towards defining the important characteristics as well as the boundaries for security requirements patterns in order to make them more effective. By examining an existing general pattern format that describes how security patterns should be structured and comparing it to existing security requirements patterns, we are deriving characterizations and boundaries for security requirements patterns. From these attributes, we propose a defining format. We hope that these can reduce user effort in elicitation and specification of security requirements patterns.

2013
Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Sean Smith, Dartmouth College.  2013.  Circumvention of Security: Good Users Do Bad Things.

Conventional wisdom holds that the textbook view describes reality and that only bad people (not good people trying to get their jobs done) break the rules. And yet the textbook view does not describe reality, and good people do circumvent.
 

Published in IEEE Security & Privacy, volume 11, issue 5, September–October 2013.

Robert Zager, John Zager.  2013.  Combat Identification in Cyberspace.

This article discusses how a system of Identification: Friend or Foe (IFF) can be implemented in email to make users less susceptible to phishing attacks.

2014
Manson, Daniel, Pike, Ronald.  2014.  The Case for Depth in Cybersecurity Education. ACM Inroads. 5:47–52.

In his book Outliers, Malcolm Gladwell describes the 10,000-Hour Rule, a key to success in any field, as simply a matter of practicing a specific task that can be accomplished with 20 hours of work a week for 10 years [10]. Ongoing changes in technology and national security needs require aspiring cybersecurity professionals who seek excellence to set a goal of 10,000 hours of relevant, hands-on skill development. The education system today is ill prepared to meet the challenge of producing an adequate number of cybersecurity professionals, but programs that use competitions and learning environments that teach depth are filling this void.

 

He, Xiaofan, Dai, Huaiyu, Shen, Wenbo, Ning, Peng.  2014.  Channel Correlation Modeling for Link Signature Security Assessment. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :25:1–25:2.

It is widely accepted that wireless channels decorrelate fast over space, and half a wavelength is the key distance metric used in link signature (LS) for security assurance. However, we believe that this channel correlation model is questionable and will lead to a false sense of security. In this project, we focus on establishing correct modeling of channel correlation so as to facilitate proper guard zone designs for LS security in various wireless environments of interest.
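
For context, the half-wavelength heuristic comes from the classical Clarke/Jakes rich-scattering model, in which spatial correlation decays as the zeroth-order Bessel function J0(2πd/λ); the short sketch below evaluates that textbook model (it is not the correlation model the authors propose).

    # Spatial correlation of Rayleigh fading under the classical Clarke/Jakes
    # rich-scattering model: rho(d) = J0(2*pi*d / wavelength). The half-wavelength
    # rule assumes this correlation is already small at d = 0.5 * wavelength; real
    # environments can deviate from this idealized model, which is the paper's concern.
    import numpy as np
    from scipy.special import j0

    wavelength = 0.125  # metres, roughly a 2.4 GHz carrier
    for d_over_lambda in (0.1, 0.25, 0.5, 1.0, 2.0):
        d = d_over_lambda * wavelength
        rho = j0(2 * np.pi * d / wavelength)
        print(f"d = {d_over_lambda:4.2f} * wavelength -> correlation {rho:+.3f}")
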

Han, Yujuan, Lu, Wenlian, Xu, Shouhuai.  2014.  Characterizing the Power of Moving Target Defense via Cyber Epidemic Dynamics. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :10:1–10:12.

Moving Target Defense (MTD) can enhance the resilience of cyber systems against attacks. Although there have been many MTD techniques, there is no systematic understanding and quantitative characterization of the power of MTD. In this paper, we propose to use a cyber epidemic dynamics approach to characterize the power of MTD. We define and investigate two complementary measures that are applicable when the defender aims to deploy MTD to achieve a certain security goal. One measure emphasizes the maximum portion of time during which the system can afford to stay in an undesired configuration (or posture), without considering the cost of deploying MTD. The other measure emphasizes the minimum cost of deploying MTD, while accommodating that the system has to stay in an undesired configuration (or posture) for a given portion of time. Our analytic studies lead to algorithms for optimally deploying MTD.
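
As a rough illustration of the first measure, the toy simulation below (an assumed SIS-style epidemic, not the paper's dynamics model) estimates how large a fraction of time a system can spend in a weaker configuration before the infection stops dying out.

    # Toy SIS-style epidemic (assumed model, not the paper's): the defender alternates
    # between a hardened posture (low infection rate) and an undesired posture (high
    # infection rate) within each period. We estimate the largest fraction of time the
    # system can spend in the undesired posture while the infection still dies out.
    import numpy as np

    def long_run_infection(frac_undesired, beta_good=0.05, beta_bad=0.4, gamma=0.2,
                           period=10.0, horizon=2000.0, dt=0.01):
        i, t = 0.01, 0.0                       # initial infected fraction
        while t < horizon:
            in_bad_posture = (t % period) / period < frac_undesired
            beta = beta_bad if in_bad_posture else beta_good
            i += dt * (beta * i * (1.0 - i) - gamma * i)
            i = min(max(i, 0.0), 1.0)
            t += dt
        return i

    for frac in np.linspace(0.0, 1.0, 11):
        print(f"undesired fraction {frac:.1f} -> long-run infection {long_run_infection(frac):.3f}")
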

Lei, X., Liao, X., Huang, T., Li, H..  2014.  Cloud Computing Service: the Case of Large Matrix Determinant Computation. Services Computing, IEEE Transactions on. PP:1-1.

The cloud computing paradigm provides an alternative and economical service for resource-constrained clients to perform large-scale data computation. Since large matrix determinant computation (DC) is ubiquitous in the fields of science and engineering, a first step is taken in this paper to design a protocol that enables clients to securely, verifiably, and efficiently outsource DC to a malicious cloud. The main idea to protect the privacy is employing some transformations on the original matrix to get an encrypted matrix which is sent to the cloud, and then transforming the result returned from the cloud to get the correct determinant of the original matrix. Afterwards, a randomized Monte Carlo verification algorithm with one-sided error is introduced, whose superiority in designing inexpensive result verification algorithms for secure outsourcing is well demonstrated. In addition, it is analytically shown that the proposed protocol simultaneously fulfills the goals of correctness, security, robust cheating resistance, and high efficiency. Extensive theoretical analysis and experimental evaluation also show its high efficiency and immediate practicability. It is hoped that the proposed protocol can shed light on designing other novel secure outsourcing protocols, and inspire powerful companies and working groups to finish programming the demanded all-inclusive scientific computation outsourcing software system. It is believed that such a software system can be profitable by providing large-scale scientific computation services for many potential clients.
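
To make the masking idea concrete, the sketch below hides a private matrix behind random permutation and diagonal factors whose determinants the client knows, then unmasks the cloud's answer; it is a toy illustration of the general approach, not the paper's actual transformations or its Monte Carlo verification step.

    # Toy sketch of the masking idea only: the client hides its private matrix A behind
    # random permutation and diagonal factors whose determinants it knows, sends the
    # masked matrix to the cloud, and unmasks the returned determinant. The paper's
    # actual transformations and its one-sided-error Monte Carlo verification differ.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    A = rng.standard_normal((n, n))            # the client's private matrix

    def random_permutation(n):
        P = np.eye(n)[rng.permutation(n)]
        sign = round(np.linalg.det(P))         # +1 or -1 (computed the easy way for this demo)
        return P, sign

    P1, s1 = random_permutation(n)
    P2, s2 = random_permutation(n)
    d1 = rng.uniform(0.5, 2.0, n)              # nonzero random diagonal entries
    d2 = rng.uniform(0.5, 2.0, n)
    B = P1 @ np.diag(d1) @ A @ np.diag(d2) @ P2   # masked matrix sent to the cloud

    det_B = np.linalg.det(B)                   # the cloud's (expensive) work

    # Unmasking: det(A) = det(B) / (det(P1) * det(D1) * det(D2) * det(P2))
    det_A = det_B / (s1 * s2 * np.prod(d1) * np.prod(d2))
    print(np.isclose(det_A, np.linalg.det(A)))  # True
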
 

Girma, Anteneh, Garuba, Moses, Goel, Rojini.  2014.  Cloud Computing Vulnerability: DDoS As Its Main Security Threat, and Analysis of IDS As a Solution Model. Proceedings of the 2014 11th International Conference on Information Technology: New Generations. :307–312.

Cloud computing has emerged as an increasingly popular means of delivering IT-enabled business services and a potential technology resource choice for many private and government organizations in today's rapidly changing computing environment. Consequently, as cloud computing technology, functionality and usability expand, unique security vulnerabilities and threats requiring timely attention arise continuously, the primary challenge being continuous service availability. This paper will address cloud security vulnerability issues, the threats propagated by a distributed denial of service (DDoS) attack on cloud computing infrastructure, and also discuss the means and techniques that could detect and prevent the attacks.

Dua, Akshay, Bulusu, Nirupama, Feng, Wu-Chang, Hu, Wen.  2014.  Combating Software and Sybil Attacks to Data Integrity in Crowd-Sourced Embedded Systems. ACM Trans. Embed. Comput. Syst.. 13:154:1–154:19.

Crowd-sourced mobile embedded systems allow people to contribute sensor data, for critical applications, including transportation, emergency response and eHealth. Data integrity becomes imperative as malicious participants can launch software and Sybil attacks modifying the sensing platform and data. To address these attacks, we develop (1) a Trusted Sensing Peripheral (TSP) enabling collection of high-integrity raw or aggregated data, and participation in applications requiring additional modalities; and (2) a Secure Tasking and Aggregation Protocol (STAP) enabling aggregation of TSP trusted readings by untrusted intermediaries, while efficiently detecting fabricators. Evaluations demonstrate that TSP and STAP are practical and energy-efficient.

Ding, Shuai, Yang, Shanlin, Zhang, Youtao, Liang, Changyong, Xia, Chenyi.  2014.  Combining QoS Prediction and Customer Satisfaction Estimation to Solve Cloud Service Trustworthiness Evaluation Problems. Know.-Based Syst.. 56:216–225.

The collection and combination of assessment data in trustworthiness evaluation of cloud services is challenging, notably because QoS values may be missing in offline evaluation situations due to the time-consuming and costly cloud service invocation. Considering the fact that many trustworthiness evaluation problems require not only objective measurement but also subjective perception, this paper designs a novel framework named CSTrust for conducting cloud service trustworthiness evaluation by combining QoS prediction and customer satisfaction estimation. The proposed framework considers how to improve the accuracy of QoS value prediction on quantitative trustworthy attributes, as well as how to estimate the customer satisfaction of the target cloud service by taking advantage of the perception ratings on qualitative attributes. The proposed methods are validated through simulations, demonstrating that CSTrust can effectively predict assessment data and release evaluation results of trustworthiness.

Rommel García, Ignacio Algredo-Badillo, Miguel Morales-Sandoval, Claudia Feregrino-Uribe, René Cumplido.  2014.  A compact FPGA-based processor for the Secure Hash Algorithm SHA-256. Computers & Electrical Engineering. 40:194-202.

This work reports an efficient and compact FPGA processor for the SHA-256 algorithm. The novel processor architecture is based on a custom datapath that exploits the reusing of modules, having as main component a 4-input Arithmetic-Logic Unit not previously reported. This ALU is designed as a result of studying the type of operations in the SHA algorithm, their execution sequence and the associated dataflow. The processor hardware architecture was modeled in VHDL and implemented in FPGAs. The results obtained from the implementation in a Virtex5 device demonstrate that the proposed design uses fewer resources achieving higher performance and efficiency, outperforming previous approaches in the literature focused on compact designs, saving around 60% FPGA slices with an increased throughput (Mbps) and efficiency (Mbps/Slice). The proposed SHA processor is well suited for applications like Wi-Fi, TMP (Trusted Mobile Platform), and MTM (Mobile Trusted Module), where the data transfer speed is around 50 Mbps.
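
The compact-datapath argument rests on how few distinct 32-bit operations a SHA-256 round actually needs; the sketch below lists the standard FIPS 180-4 round functions and a single compression step (specification-level Python, not the paper's VHDL design).

    # The standard SHA-256 round functions and one compression step (FIPS 180-4).
    # A compact datapath can reuse this small set of 32-bit rotations, additions,
    # and bitwise selections; this is specification-level Python, not the paper's VHDL.
    MASK = 0xFFFFFFFF

    def rotr(x, n):                            # 32-bit rotate right
        return ((x >> n) | (x << (32 - n))) & MASK

    def ch(x, y, z):                           # x chooses bitwise between y and z
        return ((x & y) ^ (~x & z)) & MASK

    def maj(x, y, z):                          # bitwise majority vote
        return (x & y) ^ (x & z) ^ (y & z)

    def big_sigma0(x): return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
    def big_sigma1(x): return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)

    def round_step(state, k, w):
        """One of the 64 compression rounds applied to the eight working variables."""
        a, b, c, d, e, f, g, h = state
        t1 = (h + big_sigma1(e) + ch(e, f, g) + k + w) & MASK
        t2 = (big_sigma0(a) + maj(a, b, c)) & MASK
        return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)

    state = tuple(range(1, 9))                 # dummy working variables for the demo
    print(round_step(state, k=0x428A2F98, w=0x61626380))
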

Published in the journal's 40th-year commemorative issue.

Thirunarayan, Krishnaprasad, Anantharam, Pramod, Henson, Cory, Sheth, Amit.  2014.  Comparative Trust Management with Applications: Bayesian Approaches Emphasis. Future Gener. Comput. Syst.. 31:182–199.

Trust relationships occur naturally in many diverse contexts such as collaborative systems, e-commerce, interpersonal interactions, social networks, and semantic sensor web. As agents providing content and services become increasingly removed from the agents that consume them, the issue of robust trust inference and update becomes critical. There is a need to find online substitutes for traditional (direct or face-to-face) cues to derive measures of trust, and create efficient and robust systems for managing trust in order to support decision-making. Unfortunately, there is neither a universal notion of trust that is applicable to all domains nor a clear explication of its semantics or computation in many situations. We motivate the trust problem, explain the relevant concepts, summarize research in modeling trust and gleaning trustworthiness, and discuss challenges confronting us. The goal is to provide a comprehensive broad overview of the trust landscape, with the nitty-gritties of a handful of approaches. We also provide details of the theoretical underpinnings and comparative analysis of Bayesian approaches to binary and multi-level trust, to automatically determine trustworthiness in a variety of reputation systems including those used in sensor networks, e-commerce, and collaborative environments. Ultimately, we need to develop expressive trust networks that can be assigned objective semantics.
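
As a pointer to the Bayesian machinery the survey emphasizes, the textbook Beta-reputation computation for binary trust is sketched below; it illustrates the general approach rather than the paper's exact formulation.

    # Textbook Beta-reputation computation for binary trust: with r positive and s
    # negative observations and a uniform Beta(1, 1) prior, the posterior over the
    # agent's probability of behaving well is Beta(r + 1, s + 1). The posterior mean
    # serves as the trust estimate and the variance as a measure of uncertainty.
    def beta_trust(r, s):
        alpha, beta = r + 1, s + 1
        mean = alpha / (alpha + beta)
        var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
        return mean, var

    for r, s in [(0, 0), (8, 2), (80, 20)]:
        mean, var = beta_trust(r, s)
        print(f"{r} good / {s} bad observations -> trust {mean:.3f}, variance {var:.4f}")
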

Perzyk, Marcin, Kochanski, Andrzej, Kozlowski, Jacek, Soroczynski, Artur, Biernacki, Robert.  2014.  Comparison of Data Mining Tools for Significance Analysis of Process Parameters in Applications to Process Fault Diagnosis. Inf. Sci.. 259:380–392.

This paper presents an evaluation of various methodologies used to determine relative significances of input variables in data-driven models. Significance analysis applied to manufacturing process parameters can be a useful tool in fault diagnosis for various types of manufacturing processes. It can also be applied to building models that are used in process control. The relative significances of input variables can be determined by various data mining methods, including relatively simple statistical procedures as well as more advanced machine learning systems. Several methodologies suitable for carrying out classification tasks which are characteristic of fault diagnosis were evaluated and compared from the viewpoint of their accuracy, robustness of results and applicability. Two types of testing data were used: synthetic data with assumed dependencies and real data obtained from the foundry industry. The simple statistical method based on contingency tables revealed the best overall performance, whereas advanced machine learning models, such as ANNs and SVMs, appeared to be of less value.
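
A minimal sketch of the contingency-table approach that performed best: each discretized input variable is cross-tabulated against the fault class and ranked by its chi-squared statistic. The data and variable names below are synthetic.

    # Synthetic example of significance ranking via contingency tables: each discretized
    # process parameter is cross-tabulated against the fault class and ranked by its
    # chi-squared statistic (larger = more significant). Data and names are made up.
    import numpy as np
    import pandas as pd
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(1)
    n = 500
    temperature = rng.integers(0, 3, n)        # discretized process parameters
    pressure = rng.integers(0, 3, n)
    noise = rng.integers(0, 3, n)
    # Faults depend strongly on temperature, weakly on pressure, and not on noise.
    fault = (((temperature == 2) & (rng.random(n) < 0.8))
             | ((pressure == 0) & (rng.random(n) < 0.2))
             | (rng.random(n) < 0.03))

    df = pd.DataFrame({"temperature": temperature, "pressure": pressure,
                       "noise": noise, "fault": fault})

    scores = {}
    for col in ("temperature", "pressure", "noise"):
        table = pd.crosstab(df[col], df["fault"])
        chi2, p, _, _ = chi2_contingency(table)
        scores[col] = (chi2, p)

    for col, (chi2, p) in sorted(scores.items(), key=lambda kv: -kv[1][0]):
        print(f"{col:12s} chi2 = {chi2:7.1f}   p = {p:.2g}")
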

Sarikaya, Y., Ercetin, O., Koksal, C.E..  2014.  Confidentiality-Preserving Control of Uplink Cellular Wireless Networks Using Hybrid ARQ. Networking, IEEE/ACM Transactions on. PP:1-1.

We consider the problem of cross-layer resource allocation with information-theoretic secrecy for uplink transmissions in time-varying cellular wireless networks. In particular, each node in an uplink cellular network injects two types of traffic, confidential and open, at rates chosen to maximize a global utility function while keeping the data queues stable and meeting a constraint on the secrecy outage probability. The transmitting node only knows the distribution of channel gains. Our scheme is based on Hybrid Automatic Repeat Request (HARQ) transmission with incremental redundancy. We prove that our scheme achieves a utility arbitrarily close to the maximum achievable. Numerical experiments are performed to verify the analytical results and to show the efficacy of the dynamic control algorithm.
 

Ceccarelli, A., Montecchi, L., Brancati, F., Lollini, P., Marguglio, A., Bondavalli, A..  2014.  Continuous and Transparent User Identity Verification for Secure Internet Services. Dependable and Secure Computing, IEEE Transactions on. PP:1-1.

Session management in distributed Internet services is traditionally based on username and password, explicit logouts, and mechanisms of user session expiration using classic timeouts. Emerging biometric solutions allow substituting username and password with biometric data during session establishment, but in such an approach a single verification is still deemed sufficient, and the identity of a user is considered immutable during the entire session. Additionally, the length of the session timeout may impact the usability of the service and consequent client satisfaction. This paper explores promising alternatives offered by applying biometrics in the management of sessions. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the ability of the protocol to counter security attacks exercised by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed.
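
A simplified sketch of the adaptive-timeout idea: session trust decays over time and each transparently acquired biometric sample restores it in proportion to its quality. The decay law, update rule, and parameters below are assumptions for illustration, not the protocol defined in the paper.

    # Simplified sketch: session trust decays exponentially with time and each
    # transparently acquired biometric sample pulls it back toward 1 in proportion to
    # the sample's quality; the session closes when trust falls below a threshold.
    # Decay law, update rule, and parameters are illustrative assumptions only.
    import math

    class ContinuousSession:
        def __init__(self, threshold=0.3, half_life=60.0):
            self.trust = 1.0                   # trust right after explicit login
            self.threshold = threshold
            self.decay = math.log(2) / half_life
            self.clock = 0.0

        def advance(self, seconds):
            self.clock += seconds
            self.trust *= math.exp(-self.decay * seconds)

        def biometric_sample(self, quality):
            # quality in [0, 1]; a good sample restores most of the lost trust
            self.trust = min(1.0, self.trust + quality * (1.0 - self.trust))

        @property
        def alive(self):
            return self.trust >= self.threshold

    session = ContinuousSession()
    for elapsed, quality in [(30, 0.7), (45, None), (50, 0.2), (90, None)]:
        session.advance(elapsed)
        if quality is not None:
            session.biometric_sample(quality)
        print(f"t={session.clock:5.0f}s  trust={session.trust:.2f}  alive={session.alive}")
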
 

Ray Essick, University of Illinois at Urbana-Champaign, Ji-Woong Lee, Pennsylvania State University, Geir Dullerud, University of Illinois at Urbana-Champaign.  2014.  Control of Linear Switched Systems with Receding Horizon Modal Information. IEEE Transactions on Automatic Control. 59(9)

We provide an exact solution to two performance problems—one of disturbance attenuation and one of windowed variance minimization—subject to exponential stability. Considered are switched systems, whose parameters come from a finite set and switch according to a language such as that specified by an automaton. The controllers are path-dependent, having finite memory of past plant parameters and finite foreknowledge of future parameters. Exact, convex synthesis conditions for each performance problem are expressed in terms of nested linear matrix inequalities. The resulting semidefinite programming problem may be solved offline to arrive at a suitable controller. A notion of path-by-path performance is introduced for each performance problem, leading to improved system performance. Non-regular switching languages are considered and the results are extended to these languages. Two simple, physically motivated examples are given to demonstrate the application of these results.

Jun Moon, University of Illinois at Urbana-Champaign, Tamer Başar, University of Illinois at Urbana-Champaign.  2014.  Control Over Lossy Networks: A Dynamic Game Approach. American Control Conference (ACC 2014).

This paper considers a minimax (H∞) control problem for linear time-invariant (LTI) systems where the communication loop is subject to a TCP-like packet drop network. The problem is formulated within the zero-sum dynamic game framework. The packet drop network is governed by two independent Bernoulli processes that model control and measurement packet losses. Under this constraint, we obtain a dynamic output feedback minimax controller. For the infinite-horizon case, we provide necessary and sufficient conditions in terms of the packet loss rates and the H∞ disturbance attenuation parameter under which the minimax controller exists and is able to stabilize the closed-loop system in the mean-square sense. In particular, we show that unlike the corresponding LQG case, these conditions are coupled and therefore cannot be determined independently.