Biblio

Found 1545 results

Filters: First Letter Of Title is S
2014-09-17
Yu, Xianqing, Ning, Peng, Vouk, Mladen A..  2014.  Securing Hadoop in Cloud. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :26:1–26:2.

Hadoop is a map-reduce implementation that rapidly processes data in parallel. The Cloud provides reliability, flexibility, scalability, elasticity and cost savings to customers, so moving Hadoop into the Cloud can benefit Hadoop users. However, Hadoop has two vulnerabilities that can dramatically impact its security in a Cloud environment: its overloaded authentication key, and its lack of fine-grained access control at the data access level. We propose and develop a security enhancement for Cloud-based Hadoop.

Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J..  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data[3]. This work presents a secure-by-construction approach [7] for upper-end data streams with rates from 10 to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed, specialized high-speed stream processing.

This work addresses the following contemporary problems:

(1) There is a lack of hardware/software systems providing stream processing and data stream analysis at the target data rates. For high-rate streams the implementation options are limited: all-software solutions cannot attain the target rates[1]; GPUs and GPGPUs are also infeasible, since they were not designed for I/O at 10-100 Gbps and have asymmetric resources for input and output and thus cannot be pipelined[4, 2]; custom chip-based solutions are costly and inflexible to changes; and FPGA-based solutions are historically hard to program[6].

(2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly analytics that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates.

(3) There are potentially significant network infrastructure cost savings from compact and power-efficient analytic support deployed at the network periphery, on the data source or one hop away.

(4) There is a need for agile deployment in response to changing objectives.

(5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives.

We address these five problems in our stream processor design to provide secure, easily specified, low-latency, low-power 10-100 Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface, a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on the design of the two lowest layers, highlighted in the design detail in Figure 1. These two layers are the embeddable portion of the design; operating at up to 100 Gbps, they capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components, or residues, to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing.
Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series[5]. Examples of tasks accomplished at the lowest layer are: (a) check whether Field 3 of the packet is greater than 5, (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest-latency, highest-throughput processing, analysis and data reduction, formulating raw facts from the stream. Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 comprises a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3; still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner consists of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of lightweight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit the right, and relevant facts are passed upward to Layer 2. The output at the end of the chain in Layer 1 shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent-pipe operations), or (c) passed to another identical processor, for extended processing on a given stream (scalability).
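
The "snap together" filter abstraction described above lends itself to a software prototype. The sketch below is our own minimal illustration, not code from the paper: the hypothetical `Filter`, `FieldThreshold` and `FieldSelect` classes model a Layer 1 chain in which each stage forwards the packet stream and passes extracted facts upward to a stand-in Layer 2 combiner.

```python
# Hypothetical sketch of the paper's Layer 1 filter-chain idea:
# each filter forwards packets downstream and pushes extracted
# "facts" upward to a combiner (Layer 2). Names are illustrative.

class Filter:
    """A pipeline stage: packets enter, packets exit, facts go up."""
    def __init__(self, downstream=None, combiner=None):
        self.downstream = downstream
        self.combiner = combiner

    def emit_fact(self, fact):
        if self.combiner is not None:
            self.combiner.append(fact)

    def process(self, packet):
        raise NotImplementedError

    def push(self, packet):
        out = self.process(packet)
        if out is not None and self.downstream:
            self.downstream.push(out)

class FieldThreshold(Filter):
    """Example task (a): flag packets whose field exceeds a threshold."""
    def __init__(self, field, threshold, **kw):
        super().__init__(**kw)
        self.field, self.threshold = field, threshold

    def process(self, packet):
        if packet.get(self.field, 0) > self.threshold:
            self.emit_fact(("over_threshold", self.field, packet))
        return packet  # pass the stream through unchanged

class FieldSelect(Filter):
    """Example task (c): select individual fields from packets."""
    def __init__(self, fields, **kw):
        super().__init__(**kw)
        self.fields = fields

    def process(self, packet):
        return {f: packet[f] for f in self.fields if f in packet}

# Snap filters together, block by block, to form a processing chain.
facts = []                       # stand-in for a Layer 2 combiner
chain = FieldThreshold("field3", 5, combiner=facts,
                       downstream=FieldSelect(["field1", "field3"]))
chain.push({"field1": 7, "field3": 9})
print(facts)                     # one 'over_threshold' fact emitted
```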

Szekeres, L., Payer, M., Tao Wei, Song, D..  2013.  SoK: Eternal War in Memory. Security and Privacy (SP), 2013 IEEE Symposium on. :48-62.

Memory corruption bugs in software written in low-level languages like C or C++ are one of the oldest problems in computer security. The lack of safety in these languages allows attackers to alter the program's behavior or take full control over it by hijacking its control flow. This problem has existed for more than 30 years and a vast number of potential solutions have been proposed, yet memory corruption attacks continue to pose a serious threat. Real world exploits show that all currently deployed protections can be defeated. This paper sheds light on the primary reasons for this by describing attacks that succeed on today's systems. We systematize the current knowledge about various protection techniques by setting up a general model for memory corruption attacks. Using this model we show what policies can stop which attacks. The model identifies weaknesses of currently deployed techniques, as well as other proposed protections enforcing stricter policies. We analyze the reasons why protection mechanisms implementing stricter policies are not deployed. To achieve wide adoption, protection mechanisms must support a multitude of features and must satisfy a host of requirements. Especially important is performance, as experience shows that only solutions whose overhead stays within reasonable bounds get deployed. A comparison of different enforceable policies helps designers of new protection mechanisms in finding the balance between effectiveness (security) and efficiency. We identify some open research problems, and provide suggestions on improving the adoption of newer techniques.

2014-09-26
Bau, J., Bursztein, E., Gupta, D., Mitchell, J..  2010.  State of the Art: Automated Black-Box Web Application Vulnerability Testing. Security and Privacy (SP), 2010 IEEE Symposium on. :332-345.

Black-box web application vulnerability scanners are automated tools that probe web applications for security vulnerabilities. In order to assess the current state of the art, we obtained access to eight leading tools and carried out a study of: (i) the class of vulnerabilities tested by these scanners, (ii) their effectiveness against target vulnerabilities, and (iii) the relevance of the target vulnerabilities to vulnerabilities found in the wild. To conduct our study we used a custom web application vulnerable to known and projected vulnerabilities, and previous versions of widely used web applications containing known vulnerabilities. Our results show the promise and effectiveness of automated tools, as a group, and also some limitations. In particular, "stored" forms of Cross Site Scripting (XSS) and SQL Injection (SQLI) vulnerabilities are not currently found by many tools. Because our goal is to assess the potential of future research, not to evaluate specific vendors, we do not report comparative data or make any recommendations about purchase of specific tools.
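
As a rough illustration of why "stored" vulnerabilities are harder for scanners to find, the sketch below submits a unique marker payload in one request and only flags a stored XSS if the marker reappears in a later, separate page fetch. The URLs, form fields, and the deliberately simplified check are entirely hypothetical, not taken from the study.

```python
# Minimal sketch of a black-box probe for *stored* XSS: inject a
# unique marker via a form, then revisit the page and look for the
# marker in the response. URLs and field names are hypothetical.
import urllib.request, urllib.parse, uuid

def probe_stored_xss(submit_url, view_url, field):
    marker = f"<script>/*{uuid.uuid4().hex}*/</script>"
    data = urllib.parse.urlencode({field: marker}).encode()
    urllib.request.urlopen(submit_url, data=data, timeout=10)  # store payload
    page = urllib.request.urlopen(view_url, timeout=10).read()
    # If the marker comes back unescaped on a *different* request,
    # the input was stored and re-emitted without sanitization.
    return marker.encode() in page

# Example (against a deliberately vulnerable test app only):
# if probe_stored_xss("http://testapp/guestbook",
#                     "http://testapp/guestbook", "comment"):
#     print("stored XSS suspected")
```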

2014-10-24
Yu, Tingting, Srisa-an, Witawas, Rothermel, Gregg.  2014.  SimRT: An Automated Framework to Support Regression Testing for Data Races. Proceedings of the 36th International Conference on Software Engineering. :48–59.

Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance.
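
The two components the study credits can be sketched compactly. The following is a hedged illustration of regression test selection plus race-aware prioritization, with invented data structures; it is not SimRT's actual implementation.

```python
# Hedged sketch of the two ideas SimRT combines: (1) select only the
# tests whose coverage touches program elements affected by the code
# change, and (2) prioritize the selected tests by how many
# race-relevant elements (e.g., shared-variable accesses) they cover.

def select_and_prioritize(tests, coverage, changed, race_related):
    """tests: list of test ids
    coverage: test id -> set of covered program elements
    changed: set of elements modified by the commit
    race_related: set of elements involved in shared-memory access"""
    selected = [t for t in tests if coverage[t] & changed]
    # Run tests covering many race-related changed elements first.
    return sorted(selected,
                  key=lambda t: len(coverage[t] & changed & race_related),
                  reverse=True)

coverage = {"t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"d"}}
print(select_and_prioritize(["t1", "t2", "t3"], coverage,
                            changed={"b", "c"}, race_related={"c"}))
# ['t2', 't1'] -- t3 is skipped: it does not reach the change
```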

Fulton, Nathan.  2012.  Security Through Extensible Type Systems. Proceedings of the 3rd Annual Conference on Systems, Programming, and Applications: Software for Humanity. :107–108.

Researchers interested in security often wish to introduce new primitives into a language. Extensible languages hold promise in such scenarios, but only if the extension mechanism is sufficiently safe and expressive. This paper describes several modifications to an extensible language motivated by end-to-end security concerns.

2015-01-11
Heorhiadi, Victor, Fayaz, SeyedKaveh, Reiter, Michael K., Sekar, Vyas.  2014.  SNIPS: A Software-Defined Approach for Scaling Intrusion Prevention Systems via Offloading. 10th International Conference on Information Systems Security, ICISS 2014. 8880

Growing traffic volumes and the increasing complexity of attacks pose a constant scaling challenge for network intrusion prevention systems (NIPS). In this respect, offloading NIPS processing to compute clusters offers an immediately deployable alternative to expensive hardware upgrades. In practice, however, NIPS offloading is challenging on three fronts in contrast to passive network security functions: (1) NIPS offloading can impact other traffic engineering objectives; (2) NIPS offloading impacts user perceived latency; and (3) NIPS actively change traffic volumes by dropping unwanted traffic. To address these challenges, we present the SNIPS system. We design a formal optimization framework that captures tradeoffs across scalability, network load, and latency. We provide a practical implementation using recent advances in software-defined networking without requiring modifications to NIPS hardware. Our evaluations on realistic topologies show that SNIPS can reduce the maximum load by up to 10× while only increasing the latency by 2%.
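
To make the tradeoff concrete, here is a deliberately tiny toy model of NIPS offloading, assuming one on-path box, one off-path cluster, linear load costs, and a hard latency budget. The formulation and all parameters are our own simplifications, not the SNIPS optimization framework itself.

```python
# Toy sketch of the offloading tradeoff SNIPS formalizes: choose the
# fraction f of traffic sent to an off-path cluster so as to minimize
# the maximum NIPS load, while keeping added latency within a budget.

def best_split(total_load, box_capacity, cluster_capacity,
               detour_latency, latency_budget, steps=1000):
    best = None
    for i in range(steps + 1):
        f = i / steps                      # fraction offloaded
        if f * detour_latency > latency_budget:
            break                          # latency constraint violated
        max_load = max((1 - f) * total_load / box_capacity,
                       f * total_load / cluster_capacity)
        if best is None or max_load < best[1]:
            best = (f, max_load)
    return best

# Offload as much as the latency budget allows: here f = 0.5.
print(best_split(total_load=100, box_capacity=40, cluster_capacity=200,
                 detour_latency=10, latency_budget=5))
```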

2015-01-13
Dong Jin, Illinois Institute of Technology, Yi Ning, Illinois Institute of Technology.  2014.  Securing Industrial Control Systems with a Simulation-based Verification System. ACM SIGSIM Conference on Principles of Advanced Discrete Simulation.

Today’s quality of life is highly dependent on the successful operation of many large-scale industrial control systems. To enhance their protection against cyber-attacks and operational errors, we develop a simulation-based verification framework with cross-layer verification techniques that allow comprehensive analysis of the entire ICS-specific stack, including application, protocol, and network layers.

Work in progress paper.

2015-04-28
Creech, G., Jiankun Hu..  2014.  A Semantic Approach to Host-Based Intrusion Detection Systems Using Contiguous and Discontiguous System Call Patterns. Computers, IEEE Transactions on. 63:807-819.

Host-based anomaly intrusion detection system design is very challenging due to the notoriously high false alarm rate. This paper introduces a new host-based anomaly intrusion detection methodology using discontiguous system call patterns, in an attempt to increase detection rates whilst reducing false alarm rates. The key concept is to apply a semantic structure to kernel level system calls in order to reflect intrinsic activities hidden in high-level programming languages, which can help understand program anomaly behaviour. Excellent results were demonstrated using a variety of decision engines, evaluating the KDD98 and UNM data sets, and a new, modern data set. The ADFA Linux data set was created as part of this research using a modern operating system and contemporary hacking methods, and is now publicly available. Furthermore, the new semantic method possesses an inherent resilience to mimicry attacks, and demonstrated a high level of portability between different operating system versions.
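
The contiguous versus discontiguous distinction is easy to illustrate. Below is a hypothetical sketch of extracting both kinds of system-call patterns from a trace; it conveys the flavor of the feature set but is not the paper's algorithm.

```python
# Illustrative sketch (not the paper's method) of extracting
# contiguous and discontiguous system-call patterns from a trace.
# Discontiguous pairs skip over intervening calls, which is the key
# idea behind the semantic feature set described above.

def contiguous_ngrams(trace, n):
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def discontiguous_pairs(trace, max_gap):
    """Ordered call pairs (a, b) with up to max_gap calls between them."""
    pairs = set()
    for i in range(len(trace)):
        for j in range(i + 1, min(i + 2 + max_gap, len(trace))):
            pairs.add((trace[i], trace[j]))
    return pairs

trace = ["open", "read", "mmap", "read", "write", "close"]
print(contiguous_ngrams(trace, 2)[:3])
print(sorted(discontiguous_pairs(trace, max_gap=1))[:5])
```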

2015-04-30
Vamsi, P.R., Kant, K..  2014.  Sybil attack detection using Sequential Hypothesis Testing in Wireless Sensor Networks. Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on. :698-702.

The Sybil attack poses a serious threat to geographic routing. In this attack, a malicious node attempts to broadcast incorrect location information, identity, and secret key information. A Sybil node can tamper with its neighboring nodes in order to convert them into malicious nodes as well. As the number of Sybil nodes in the network increases, network traffic is seriously affected and data packets may never reach their destinations. To address this problem, researchers have proposed several schemes to detect Sybil attacks. However, most of these schemes assume a costly setup, such as the use of relay nodes, expensive devices, or expensive encryption methods to verify the location information. In this paper, the authors present a method to detect Sybil attacks using Sequential Hypothesis Testing. The proposed method has been examined using the Greedy Perimeter Stateless Routing (GPSR) protocol with analysis and simulation. The simulation results demonstrate that the proposed method is robust in detecting Sybil attacks.
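
The statistical core of the method, Wald's Sequential Probability Ratio Test, can be sketched in a few lines. In the illustration below, each observation is a binary "suspicious location claim" indicator; the hypothesis probabilities and error rates are invented for the example, not taken from the paper.

```python
# Sketch of Wald's Sequential Probability Ratio Test (SPRT), the
# statistical tool named above. H0: node is benign (suspicious
# observations rare, prob p0); H1: node is Sybil (prob p1).
import math

def sprt(observations, p0=0.1, p1=0.7, alpha=0.01, beta=0.01):
    """Accept H1 (Sybil) or H0 (benign) as evidence accumulates."""
    upper = math.log((1 - beta) / alpha)     # accept H1 above this
    lower = math.log(beta / (1 - alpha))     # accept H0 below this
    llr = 0.0                                # log-likelihood ratio
    for x in observations:
        if x:   # suspicious observation
            llr += math.log(p1 / p0)
        else:   # consistent observation
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "sybil"
        if llr <= lower:
            return "benign"
    return "undecided"   # keep observing

print(sprt([1, 1, 0, 1, 1]))   # -> 'sybil' after a few samples
```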

Fawzi, H., Tabuada, P., Diggavi, S..  2014.  Secure Estimation and Control for Cyber-Physical Systems Under Adversarial Attacks. Automatic Control, IEEE Transactions on. 59:1454-1467.

The vast majority of today's critical infrastructure is supported by numerous feedback control loops, and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A,C) of the system, and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm inspired by techniques in compressed sensing to estimate the state of the plant despite attacks. We give a theoretical characterization of the performance of this algorithm, and we show in numerical simulations that the method is promising and allows the state to be reconstructed accurately despite attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output feedback controllers can be reduced to the design of resilient state estimators.
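
The error-correction intuition behind this result can be illustrated with a brute-force sketch: if at most q of m sensors are attacked, search for a subset of m - q sensors that agree exactly on some state. The static scalar model below is our simplification, and the exhaustive search is precisely what the paper's compressed-sensing-inspired decoder avoids.

```python
# Brute-force sketch of the error-correction view of secure state
# estimation: each sensor reports y_i = c_i * x + attack_i, with at
# most q nonzero attacks. Find m - q sensors that are mutually
# consistent with a single state x.
from itertools import combinations

def secure_estimate(y, c, q, tol=1e-6):
    m = len(y)
    for keep in combinations(range(m), m - q):
        # Least-squares fit of x over the kept sensors...
        num = sum(c[i] * y[i] for i in keep)
        den = sum(c[i] ** 2 for i in keep)
        x = num / den
        # ...accepted only if *all* kept residuals vanish.
        if all(abs(y[i] - c[i] * x) <= tol for i in keep):
            return x, keep
    return None, None

# 5 sensors observing x = 2.0; sensors 1 and 3 are attacked.
c = [1.0, 1.0, 2.0, 1.0, 3.0]
y = [2.0, 9.0, 4.0, -5.0, 6.0]
print(secure_estimate(y, c, q=2))   # -> (2.0, (0, 2, 4))
```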

Ben Othman, S., Trad, A., Youssef, H..  2014.  Security architecture for at-home medical care using Wireless Sensor Network. Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International. :304-309.

Distributed wireless sensor network technologies have become one of the major research areas in the healthcare industry due to their rapid maturation and their potential to improve quality of life. Medical Wireless Sensor Networks (MWSNs), via continuous monitoring of vital health parameters over a long period of time, can enable physicians to make more accurate diagnoses and provide better treatment. MWSNs provide options for flexibility and cost saving to patients and healthcare industries. Medical sensors on patients produce an increasingly large volume of increasingly diverse real-time data, and the transmission of this data through hospital wireless networks becomes a crucial problem, because the health information of an individual is highly sensitive and must be kept private and secure. In this paper, we propose a security model to protect the transfer of medical data in hospitals using MWSNs. We propose Compressed Sensing + Encryption as a strategy to achieve low-energy secure data transmission in sensor networks.
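
A toy version of the "Compressed Sensing + Encryption" pipeline is sketched below: compress the signal with a random projection so that far fewer values need to be transmitted, then encrypt only the compressed measurements. The XOR keystream is an illustrative stand-in for a real cipher such as AES, and all parameters are invented.

```python
# Toy sketch of Compressed Sensing + Encryption: project the sensor
# signal to m << n measurements, then encrypt the small measurement
# vector before transmission.
import hashlib
import numpy as np

rng = np.random.default_rng(0)

def cs_compress(x, m):
    phi = rng.standard_normal((m, len(x)))   # sensing matrix (shared)
    return phi @ x                           # m measurements

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Illustration only -- NOT cryptographically secure.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

signal = np.sin(np.linspace(0, 8, 256))      # n = 256 samples
y = cs_compress(signal, m=64)                # 4x fewer values to send
ct = xor_encrypt(y.tobytes(), key=b"shared-secret")
print(len(signal) * 8, "bytes raw ->", len(ct), "bytes sent")
```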

Yilin Mo, Sinopoli, B..  2015.  Secure Estimation in the Presence of Integrity Attacks. Automatic Control, IEEE Transactions on. 60:1145-1151.

We consider the estimation of a scalar state based on m measurements that can be potentially manipulated by an adversary. The attacker is assumed to have full knowledge about the true value of the state to be estimated and about the value of all the measurements. However, the attacker has limited resources and can only manipulate up to l of the m measurements. The problem is formulated as a minimax optimization, where one seeks to construct an optimal estimator that minimizes the “worst-case” expected cost against all possible manipulations by the attacker. We show that if the attacker can manipulate at least half the measurements (l ≥ m/2), then the optimal worst-case estimator should ignore all measurements and be based solely on the a priori information. We provide the explicit form of the optimal estimator when the attacker can manipulate less than half the measurements (l < m/2), which is based on (m choose 2l) local estimators. We further prove that such an estimator can be reduced into simpler forms for two special cases, i.e., either the estimator is symmetric and monotone or m = 2l + 1. Finally we apply the proposed methodology in the case of Gaussian measurements.

Ta-Yuan Liu, Mukherjee, P., Ulukus, S., Shih-Chun Lin, Hong, Y.-W.P..  2014.  Secure DoF of MIMO Rayleigh block fading wiretap channels with No CSI anywhere. Communications (ICC), 2014 IEEE International Conference on. :1959-1964.

We consider the block Rayleigh fading multiple-input multiple-output (MIMO) wiretap channel with no prior channel state information (CSI) available at any of the terminals. The channel gains remain constant in a coherence time of T symbols, and then change to another independent realization. The transmitter, the legitimate receiver and the eavesdropper have nt, nr and ne antennas, respectively. We determine the exact secure degrees of freedom (s.d.o.f.) of this system when T ≥ 2 min(nt, nr). We show that, in this case, the s.d.o.f. is exactly (min(nt, nr) - ne)+ · (T - min(nt, nr))/T, where (x)+ denotes max(x, 0). The first factor can be interpreted as the eavesdropper with ne antennas taking away ne antennas from both the transmitter and the legitimate receiver. The second factor can be interpreted as a fraction of s.d.o.f. being lost due to the lack of CSI at the legitimate receiver. In particular, the fractional loss, min(nt, nr)/T, can be interpreted as the fraction of channel uses dedicated to training the legitimate receiver for it to learn its own CSI. We prove that this s.d.o.f. can be achieved by employing a constant norm channel input, which can be viewed as a generalization of discrete signalling to multiple dimensions.
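
For reference, the quoted s.d.o.f. expression transcribes directly, with (x)+ written out as max(x, 0):

```python
# Direct transcription of the s.d.o.f. expression quoted above.
def sdof(nt, nr, ne, T):
    n = min(nt, nr)
    assert T >= 2 * n, "formula stated for T >= 2*min(nt, nr)"
    return max(n - ne, 0) * (T - n) / T

# Example: nt = nr = 2, ne = 1, coherence time T = 4 symbols.
print(sdof(2, 2, 1, 4))   # (2-1)+ * (4-2)/4 = 0.5
```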

Skopik, F., Settanni, G., Fiedler, R., Friedberg, I..  2014.  Semi-synthetic data set generation for security software evaluation. Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on. :156-163.

Threats to modern ICT systems are rapidly changing these days. Organizations are no longer mainly concerned about virus infestation; increasingly they need to deal with targeted attacks, which are specifically designed to stay below the radar of standard ICT security systems. As a consequence, vendors have begun to ship self-learning intrusion detection systems with sophisticated heuristic detection engines. While these approaches are promising ways to relax the serious security situation, one of the main challenges is the proper evaluation of such systems under realistic conditions during development and before roll-out. In particular, the wide variety of configuration settings makes it hard to find the optimal setup for a specific infrastructure. However, extensive testing in a live environment is not only cumbersome but usually also impacts daily business. In this paper, we therefore introduce an evaluation setup that consists of virtual components which imitate real systems and human user interactions as closely as possible, producing the system events, network flows and logging data of complex ICT service environments. This data is a key prerequisite for the evaluation of modern intrusion detection and prevention systems. With these generated data sets, a system's detection performance can be accurately rated and tuned for very specific settings.

Montague, E., Jie Xu, Chiou, E..  2014.  Shared Experiences of Technology and Trust: An Experimental Study of Physiological Compliance Between Active and Passive Users in Technology-Mediated Collaborative Encounters. Human-Machine Systems, IEEE Transactions on. 44:614-624.

The aim of this study is to examine the utility of physiological compliance (PC) to understand shared experience in a multiuser technological environment involving active and passive users. Common ground is critical for effective collaboration and important for multiuser technological systems that include passive users since this kind of user typically does not have control over the technology being used. An experiment was conducted with 48 participants who worked in two-person groups in a multitask environment under varied task and technology conditions. Indicators of PC were measured from participants' cardiovascular and electrodermal activities. The relationship between these PC indicators and collaboration outcomes, such as performance and subjective perception of the system, was explored. Results indicate that PC is related to group performance after controlling for task/technology conditions. PC is also correlated with shared perceptions of trust in technology among group members. PC is a useful tool for monitoring group processes and, thus, can be valuable for the design of collaborative systems. This study has implications for understanding effective collaboration.

Algarni, A., Yue Xu, Chan, T..  2014.  Social Engineering in Social Networking Sites: The Art of Impersonation. Services Computing (SCC), 2014 IEEE International Conference on. :797-804.

Social networking sites (SNSs), with their large number of users and large information base, seem to be the perfect breeding ground for exploiting the vulnerabilities of people, who are considered the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as "social engineering." Fraudulent and deceptive people use social engineering traps and tactics through SNSs to trick users into obeying them, accepting threats, and falling victim to various crimes such as phishing, sexual abuse, financial abuse, identity theft, and physical crime. Although organizations, researchers, and practitioners recognize the serious risks of social engineering, there is a severe lack of understanding and control of such threats. This may be partly due to the complexity of human behaviors in approaching, accepting, and failing to recognize social engineering tricks. This research aims to investigate the impact of source characteristics on users' susceptibility to social engineering victimization in SNSs, particularly Facebook. Using grounded theory method, we develop a model that explains what and how source characteristics influence Facebook users to judge the attacker as credible.

Aiash, M., Mapp, G., Gemikonakli, O..  2014.  Secure Live Virtual Machines Migration: Issues and Solutions. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :160-165.

In recent years, there has been a huge trend towards running network-intensive applications, such as Internet servers and Cloud-based services, in virtual environments, where multiple virtual machines (VMs) running on the same machine share the machine's physical and network resources. In such an environment, the virtual machine monitor (VMM) virtualizes the machine's resources in terms of CPU, memory, storage, network and I/O devices to allow multiple operating systems running in different VMs to operate and access the network concurrently. A key feature of virtualization is live migration (LM), which allows the transfer of a virtual machine from one physical server to another without interrupting the services running in the virtual machine. Live migration facilitates workload balancing, fault tolerance, online system maintenance, consolidation of virtual machines, etc. However, live migration is still in an early stage of implementation and its security is yet to be evaluated. The security concern of live migration is a major factor in its adoption by the IT industry. Therefore, this paper uses the X.805 security standard to investigate attacks on live virtual machine migration. The analysis highlights the main sources of threats and suggests approaches to tackle them. The paper also surveys and compares different proposals in the literature to secure the live migration.

Chiang, R., Rajasekaran, S., Zhang, N., Huang, H..  2014.  Swiper: Exploiting Virtual Machine Vulnerability in Third-Party Clouds with Competition for I/O Resources. Parallel and Distributed Systems, IEEE Transactions on. PP:1-1.

The emerging paradigm of cloud computing, e.g., Amazon Elastic Compute Cloud (EC2), promises a highly flexible yet robust environment for large-scale applications. Ideally, while multiple virtual machines (VMs) share the same physical resources (e.g., CPUs, caches, DRAM, and I/O devices), each application should be allocated to an independently managed VM and isolated from the others. Unfortunately, the absence of physical isolation inevitably opens doors to a number of security threats. In this paper, we demonstrate in EC2 a new type of security vulnerability caused by competition between virtual I/O workloads: by leveraging the competition for shared resources, an adversary could intentionally slow down the execution of a targeted application in a VM that shares the same hardware. In particular, we focus on I/O resources such as hard-drive throughput and/or network bandwidth, which are critical for data-intensive applications. We design and implement Swiper, a framework which uses a carefully designed workload to incur significant delays on the targeted application and VM with minimum cost (i.e., resource consumption). We conduct a comprehensive set of experiments in EC2, which clearly demonstrate that Swiper is capable of significantly slowing down various server applications while consuming a small amount of resources.

Barclay, C..  2014.  Sustainable security advantage in a changing environment: The Cybersecurity Capability Maturity Model (CM2). ITU Kaleidoscope Academic Conference: Living in a converged world - Impossible without standards?, Proceedings of the 2014. :275-282.

With the rapid advancement in technology and the growing complexities in the interaction of these technologies and networks, it is even more important for countries and organizations to gain sustainable security advantage. Security advantage refers to the ability to manage and respond to threats and vulnerabilities with a proactive security posture. This is accomplished through effectively planning, managing, responding to and recovering from threats and vulnerabilities. However not many organizations and even countries, especially in the developing world, have been able to equip themselves with the necessary and sufficient know-how or ability to integrate knowledge and capabilities to achieve security advantage within their environment. Having a structured set of requirements or indicators to aid in progressively attaining different levels of maturity and capabilities is one important method to determine the state of cybersecurity readiness. The research introduces the Cybersecurity Capability Maturity Model (CM2), a 6-step process of progressive development of cybersecurity maturity and knowledge integration that ranges from a state of limited awareness and application of security controls to pervasive optimization of the protection of critical assets.

Guizani, S..  2014.  Security applications challenges of RFID technology and possible countermeasures. Computing, Management and Telecommunications (ComManTel), 2014 International Conference on. :291-297.

Radio Frequency IDentification (RFID) is a technique for speedy and proficient identification; it has been around for more than 50 years and was initially developed to improve warfare machinery. RFID technology bridges two technologies in the area of Information and Communication Technologies (ICT), namely Product Code (PC) technology and Wireless technology. This broad-based, rapidly expanding technology impacts business, environment and society. The operating principle of an RFID system is as follows. The reader starts a communication process by radiating an electromagnetic wave. This wave is intercepted by the antenna of the RFID tag placed on the item to be identified. An induced current is created at the tag and activates the integrated circuit, enabling it to send back a wave to the reader. The reader redirects the information to the host, where it is processed. RFID is used for a wide range of applications in almost every field (health, education, industry, security, management, ...). In this review paper, we will focus on agricultural and environmental applications.

Hassen, H., Khemakhem, M..  2014.  A secured distributed OCR system in a pervasive environment with authentication as a service in the Cloud. Multimedia Computing and Systems (ICMCS), 2014 International Conference on. :1200-1205.

In this paper we explore the potential for securing a distributed Arabic Optical Character Recognition (OCR) system via cloud computing technology in a pervasive and mobile environment. The goal of the system is to achieve full accuracy, high speed and security when taking into account large vocabularies and amounts of documents. This issue has been resolved by integrating the recognition process and the security issue with multiprocessing and distributed computing technologies.

Janiuk, J., Macker, A., Graffi, K..  2014.  Secure distributed data structures for peer-to-peer-based social networks. Collaboration Technologies and Systems (CTS), 2014 International Conference on. :396-405.

Online social networks are attracting billions of users nowadays, both on a global scale and within social enterprise networks. Using distributed hash tables and peer-to-peer technology allows online social networks to be operated securely and efficiently using only the resources of the user devices, thus alleviating censorship or data misuse by a single network operator. In this paper, we address the challenges that arise in implementing reliable and convenient-to-use distributed data structures, such as lists or sets, in such a distributed-hash-table-based online social network. We present a secure, distributed list data structure that manages the list entries in several buckets in the distributed hash table. The list entries are authenticated, integrity is maintained, and access control for single users and also groups is integrated. The approach for secure distributed lists is also applied to prefix trees and sets, and implemented and evaluated in a peer-to-peer framework for social networks. Evaluation shows that the distributed data structure is convenient and efficient to use and that the security requirements hold.
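
The bucket-based list can be sketched concretely. In the hedged illustration below, a plain dictionary stands in for the distributed hash table and an HMAC stands in for the paper's public-key signatures and access-control machinery; bucket spilling shows how list entries are spread across several DHT keys.

```python
# Hedged sketch of a bucket-based distributed list: entries are
# spread over several DHT keys ("buckets"), each entry is signed by
# its author, and readers verify integrity before use. The dict and
# HMAC are illustrative stand-ins for the real overlay and signatures.
import hashlib, hmac, json

DHT = {}                      # stand-in for the distributed hash table
BUCKET_SIZE = 4

def bucket_key(list_id, i):
    return hashlib.sha1(f"{list_id}/bucket/{i}".encode()).hexdigest()

def append(list_id, entry, author_key: bytes):
    sig = hmac.new(author_key, json.dumps(entry).encode(),
                   "sha256").hexdigest()
    i = 0
    while len(DHT.get(bucket_key(list_id, i), [])) >= BUCKET_SIZE:
        i += 1                # spill into the next bucket when full
    DHT.setdefault(bucket_key(list_id, i), []).append((entry, sig))

def read_all(list_id, author_key: bytes):
    out, i = [], 0
    while (bucket := DHT.get(bucket_key(list_id, i))) is not None:
        for entry, sig in bucket:
            good = hmac.new(author_key, json.dumps(entry).encode(),
                            "sha256").hexdigest()
            if hmac.compare_digest(sig, good):   # integrity check
                out.append(entry)
        i += 1
    return out

key = b"alice-secret"
for n in range(6):
    append("alice/wall", {"post": n}, key)
print(read_all("alice/wall", key))   # 6 posts across 2 buckets
```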