Biblio

Found 127 results

Filters: Keyword is Data models
2017-02-14
M. Wurzenberger, F. Skopik, G. Settanni, R. Fiedler.  2015.  "Beyond gut instincts: Understanding, rating and comparing self-learning IDSs". 2015 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1-1.

Today, ICT networks are the economy's vital backbone. While their complexity continuously evolves, sophisticated and targeted cyber attacks such as Advanced Persistent Threats (APTs) become increasingly damaging to organizations. Numerous highly developed Intrusion Detection Systems (IDSs) promise to detect certain characteristics of APTs, but no mechanism currently exists for rating, comparing and evaluating them with respect to specific customer infrastructures. In this paper, we present BAESE, a system which enables vendor-independent and objective rating and comparison of IDSs based on small sets of customer network data.

L. Rivière, J. Bringer, T. H. Le, H. Chabanne.  2015.  "A novel simulation approach for fault injection resistance evaluation on smart cards". 2015 IEEE Eighth International Conference on Software Testing, Verification and Validation Workshops (ICSTW). :1-8.

Physical perturbations are performed against embedded systems that can contain valuable data. Such devices, and in particular smart cards, are targeted because potential attackers physically hold them. Embedded system security must therefore hold against intentional hardware failures that can result in software errors. With malicious intent, an attacker could exploit such errors to uncover secret data or disrupt a transaction. Simulation techniques help to point out fault injection vulnerabilities at an early stage in the development process. This paper proposes a generic fault injection simulation tool whose particularity is to embed the injection mechanism into the smart card source code. By its embedded nature, the Embedded Fault Simulator (EFS) allows us to perform fault injection simulations and side-channel analyses simultaneously. It makes it possible to achieve combined attacks and multiple fault attacks, and to perform backward analyses. We appraise our approach on real, modern and complex smart card systems under data and control flow fault models. We illustrate the EFS capabilities by performing a practical combined attack on an Advanced Encryption Standard (AES) implementation.
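
For illustration only, a toy Python sketch of the embedded-simulator idea: the injection hook lives inside the target code and applies a single-bit data fault at a chosen step, yielding fault-free/faulty output pairs for differential analysis. The fault dictionary, maybe_inject hook and toy_cipher are invented stand-ins, not the EFS or a real AES.

    # Embedded fault simulation, toy version: the hook is compiled into the
    # target code itself, so faults can be triggered at named program points.
    fault = {"active": False, "step": None, "bit": 0}

    def maybe_inject(value: int, step: str) -> int:
        if fault["active"] and fault["step"] == step:
            return value ^ (1 << fault["bit"])       # single-bit data fault
        return value

    def toy_cipher(x: int, key: int) -> int:
        state = maybe_inject(x ^ key, "add_key")          # injection point 1
        state = maybe_inject((state * 7) & 0xFF, "mix")   # injection point 2
        return state

    ref = toy_cipher(0x3A, 0x5C)                  # fault-free reference run
    fault.update(active=True, step="mix", bit=3)
    faulty = toy_cipher(0x3A, 0x5C)               # faulted run, same inputs
    print(f"fault-free {ref:02x} vs faulty {faulty:02x}")  # differential pair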

C. H. Hsieh, C. M. Lai, C. H. Mao, T. C. Kao, K. C. Lee.  2015.  "AD2: Anomaly detection on active directory log data for insider threat monitoring". 2015 International Carnahan Conference on Security Technology (ICCST). :287-292.

"What you see is not necessarily what is really happening" is not a rare situation in cyber security monitoring. Due to various camouflage tricks, such as packing or virtual private networks (VPNs), detecting advanced persistent threats (APTs) with only a signature-based malware detection system becomes more and more intractable. On the other hand, by carefully modeling users' subsequent behaviors in their daily routines, the probability of one account generating certain operations can be estimated and used in anomaly detection. To the best of our knowledge, this project is the first to propose a behavioral analytic framework dedicated to analyzing Active Directory domain service logs and monitoring potential insider threats. Experiments on a real dataset not only show that the proposed idea explores a new feasible direction for cyber security monitoring, but also give a guideline on how to deploy this framework in various environments.
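
As a rough illustration of the behavioral-baseline idea (not the authors' AD2 implementation), the Python sketch below estimates per-account operation probabilities from historical logs with Laplace smoothing and scores new sessions by average log-likelihood; all names and the multinomial model are assumptions.

    from collections import Counter, defaultdict
    import math

    def fit_baselines(history):
        """history: iterable of (account, operation) pairs from past logs."""
        counts = defaultdict(Counter)
        for account, op in history:
            counts[account][op] += 1
        baselines = {}
        for account, ctr in counts.items():
            total = sum(ctr.values())
            vocab = len(ctr) + 1                       # +1 slot for unseen ops
            baselines[account] = {op: (n + 1) / (total + vocab)  # Laplace smoothing
                                  for op, n in ctr.items()}
            baselines[account]["_unseen"] = 1 / (total + vocab)
        return baselines

    def session_score(baseline, ops):
        """Average log-probability of a session; lower means more anomalous."""
        return sum(math.log(baseline.get(op, baseline["_unseen"]))
                   for op in ops) / len(ops)

    history = [("alice", "logon"), ("alice", "read_share"), ("alice", "logon"),
               ("alice", "logon"), ("alice", "read_share")]
    baselines = fit_baselines(history)
    print(session_score(baselines["alice"], ["logon", "read_share"]))    # typical
    print(session_score(baselines["alice"], ["add_member", "reset_pw"])) # anomalous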

2015-11-12
Emfinger, W., Karsai, G.  2015.  Modeling Network Medium Access Protocols for Network Quality of Service Analysis. Real-Time Distributed Computing (ISORC), 2015 IEEE 18th International Symposium on. :292-295.

Design-time analysis and verification of distributed real-time embedded systems necessitates the modeling of the time-varying performance of the network and comparing that to application requirements. Earlier work has shown how to build a system network model that abstracted away the network's physical medium and protocols which govern its access and multiplexing. In this work we show how to apply a network medium channel access protocol, such as Time-Division Multiple Access (TDMA), to our network analysis methods and use the results to show that the abstracted model without the explicit model of the protocol is valid.
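
A hedged Python sketch of the core idea, assuming a two-node slot layout and link capacity chosen for illustration: fold a TDMA schedule into a time-varying available-bandwidth profile and check a constant application demand against it (the paper's actual analysis methods are richer than this).

    def tdma_profile(capacity_bps, period_s, slots, node, horizon_s, dt=0.001):
        """[(t, available_bps)] samples; node transmits only in its own slots.
        The slot list must cover the whole period."""
        profile, t = [], 0.0
        while t < horizon_s:
            pos = t % period_s
            owner = next(n for (start, end, n) in slots if start <= pos < end)
            profile.append((t, capacity_bps if owner == node else 0.0))
            t += dt
        return profile

    def meets_demand(profile, demand_bps, dt=0.001):
        """Cumulative supply must never fall behind cumulative demand."""
        supplied = needed = 0.0
        for _, bw in profile:
            supplied += bw * dt
            needed += demand_bps * dt
            if supplied < needed:
                return False
        return True

    # 10 ms period split evenly between two nodes on a 100 Mbps medium.
    slots = [(0.000, 0.005, "A"), (0.005, 0.010, "B")]
    prof = tdma_profile(100e6, 0.010, slots, "A", horizon_s=0.1)
    print(meets_demand(prof, demand_bps=40e6))  # True: A averages 50 Mbps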

2015-05-06
Jian Sun, Haitao Liao, Upadhyaya, B.R.  2014.  A Robust Functional-Data-Analysis Method for Data Recovery in Multichannel Sensor Systems. Cybernetics, IEEE Transactions on. 44:1420-1431.

Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of the coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
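
A simplified numerical sketch of two ingredients named in the abstract, robust grand-median templates and a regression between the functional scores of correlated channels, on synthetic data; it is not the paper's FPCA method, and all names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)
    runs_a = np.array([np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(200)
                       for _ in range(20)])
    runs_b = 0.8 * runs_a + 0.05 * rng.standard_normal(runs_a.shape)  # correlated channel

    # Robust "grand median" template per channel (outlier-resistant smoothing).
    median_a = np.median(runs_a, axis=0)
    median_b = np.median(runs_b, axis=0)

    # Per-run functional scores: least-squares fit of each run to its template.
    score_a = runs_a @ median_a / (median_a @ median_a)
    score_b = runs_b @ median_b / (median_b @ median_b)

    # Score relationship (stand-in for multivariate functional regression).
    slope, intercept = np.polyfit(score_b, score_a, 1)

    # Recover a run of channel A that was lost entirely, using channel B.
    lost = 3
    a_hat = (slope * score_b[lost] + intercept) * median_a
    print("recovery RMSE:", np.sqrt(np.mean((a_hat - runs_a[lost]) ** 2)))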
 

Nower, N., Yasuo Tan, Lim, A.O.  2014.  Efficient Temporal and Spatial Data Recovery Scheme for Stochastic and Incomplete Feedback Data of Cyber-physical Systems. Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on. :192-197.

Feedback loss can severely degrade overall system performance; in addition, it can affect the control and computation of Cyber-Physical Systems (CPS). CPS hold enormous potential for a wide range of emerging applications, including those with stochastic and time-critical traffic patterns. Stochastic data is random by nature, which makes it a great challenge to maintain real-time control whenever data is lost. In this paper, we propose a data recovery scheme, called the Efficient Temporal and Spatial Data Recovery (ETSDR) scheme, for stochastic incomplete feedback of CPS. In this scheme, we identify the temporal model based on the traffic patterns and consider the spatial effect of the nearest neighbor. Numerical results reveal that the proposed ETSDR outperforms both the weighted prediction (WP) and the exponentially weighted moving average (EWMA) algorithms, regardless of the percentage of missing data, in terms of the root mean square error, the mean absolute error, and the integral of absolute error.
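
For orientation, a small Python sketch of the EWMA baseline that ETSDR is compared against, together with the three reported error metrics (RMSE, MAE, IAE); the ETSDR scheme itself is not reproduced, and the loss pattern and smoothing factor are assumed.

    import numpy as np

    def ewma_recover(series, alpha=0.3):
        """Replace NaNs with the exponentially weighted moving average so far."""
        out, est = [], None
        for x in series:
            if np.isnan(x):
                x = est if est is not None else 0.0   # recover lost feedback
            est = x if est is None else alpha * x + (1 - alpha) * est
            out.append(x)
        return np.array(out)

    truth = np.sin(np.linspace(0, 6, 100))
    observed = truth.copy()
    observed[np.random.default_rng(1).random(100) < 0.2] = np.nan  # 20% loss

    recovered = ewma_recover(observed)
    err = recovered - truth
    print("RMSE:", np.sqrt(np.mean(err ** 2)))
    print("MAE :", np.mean(np.abs(err)))
    print("IAE :", np.sum(np.abs(err)))   # integral of absolute error (unit step)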
 

Castro Marquez, C.I., Strum, M., Wang Jiang Chau.  2014.  A unified sequential equivalence checking approach to verify high-level functionality and protocol specification implementations in RTL designs. Test Workshop - LATW, 2014 15th Latin American. :1-6.

Formal techniques provide exhaustive design verification, but computational cost has an important negative impact on their efficiency. Sequential equivalence checking is an effective approach, but traditionally it has only been applied between circuit descriptions with a one-to-one correspondence of states. Applying it between RTL descriptions and high-level reference models requires removing signals, variables and states exclusive to the RTL description so as to comply with the state correspondence restriction. In this paper, we extend a previous formal methodology for RTL verification with high-level models to also check the signals and protocol implemented in the RTL design. This protocol implementation is compared formally to a description captured from the specification. Thus, we can thoroughly prove the sequential behavior of a design under verification.
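
A toy explicit-state sketch of sequential equivalence checking: breadth-first exploration of the product of two finite-state machines, reporting any reachable state pair whose outputs diverge. Real SEC tools work symbolically on much larger state spaces; the two machines here are invented examples.

    from collections import deque

    # Machine encoding: {(state, input): (next_state, output)}
    rtl  = {("s0", 0): ("s0", 0), ("s0", 1): ("s1", 1),
            ("s1", 0): ("s0", 0), ("s1", 1): ("s1", 1)}
    gold = {("a", 0): ("a", 0), ("a", 1): ("b", 1),
            ("b", 0): ("a", 0), ("b", 1): ("b", 1)}

    def equivalent(m1, s1, m2, s2, inputs=(0, 1)):
        """Explore the product machine; fail on any output mismatch."""
        seen, queue = {(s1, s2)}, deque([(s1, s2)])
        while queue:
            p, q = queue.popleft()
            for i in inputs:
                (p2, o1), (q2, o2) = m1[(p, i)], m2[(q, i)]
                if o1 != o2:
                    return False, (p, q, i)       # counterexample state/input
                if (p2, q2) not in seen:
                    seen.add((p2, q2))
                    queue.append((p2, q2))
        return True, None

    print(equivalent(rtl, "s0", gold, "a"))       # (True, None)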
 

Chunhui Zhao.  2014.  Fault subspace selection and analysis of relative changes based reconstruction modeling for multi-fault diagnosis. Control and Decision Conference (2014 CCDC), The 26th Chinese. :235-240.

Online fault diagnosis is a crucial task for industrial processes. Reconstruction-based fault diagnosis has been drawing special attention as a good alternative to the traditional contribution plot. It identifies the fault cause by finding the specific fault subspace that can well eliminate alarming signals from a bunch of alternatives that have been prepared based on historical fault data. In practice, however, the abnormality may result from the joint effects of multiple faults, which thus cannot be well corrected by a single fault subspace archived in the historical fault library. In the present work, an aggregative reconstruction-based fault diagnosis strategy is proposed to handle the case where multiple fault causes jointly contribute to the abnormal process behaviors. First, fault subspaces are extracted based on historical fault data in two different monitoring subspaces, where analysis of relative changes is used to enclose the major fault effects that are responsible for different alarming monitoring statistics. Then, a fault subspace selection strategy is developed to analyze the combinatorial fault nature; it sorts and selects the informative fault subspaces that are most likely to be responsible for the concerned abnormalities. Finally, an aggregative fault subspace is calculated by combining the selected fault subspaces; it represents the joint effects of multiple faults and works as the final reconstruction model for online fault diagnosis. Theoretical support is framed and the related statistical characteristics are analyzed. Feasibility and performance are illustrated with simulated multiple faults using data from the Tennessee Eastman (TE) benchmark process.
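
A hedged numerical sketch of reconstruction-based diagnosis on synthetic data: each candidate fault subspace is used to correct an alarming sample, and the (possibly aggregative) subspace that best eliminates the alarm statistic is selected; the paper's relative-changes analysis and subspace extraction are not reproduced.

    import numpy as np

    def spe_after_reconstruction(x, theta):
        """Remove the best fault magnitude f along subspace theta; return residual SPE."""
        f, *_ = np.linalg.lstsq(theta, x, rcond=None)
        residual = x - theta @ f
        return float(residual @ residual)

    rng = np.random.default_rng(2)
    theta1 = rng.standard_normal((5, 1))          # historical fault subspace 1
    theta2 = rng.standard_normal((5, 1))          # historical fault subspace 2

    # Abnormal sample driven jointly by both faults plus noise.
    x = 3 * theta1[:, 0] + 2 * theta2[:, 0] + 0.01 * rng.standard_normal(5)

    candidates = {"fault1": theta1, "fault2": theta2,
                  "fault1+fault2": np.hstack([theta1, theta2])}  # aggregative subspace
    for name, theta in candidates.items():
        print(name, spe_after_reconstruction(x, theta))
    # The combined subspace yields the smallest residual, pointing to a multi-fault cause.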
 

Mokhtar, B., Eltoweissy, M.  2014.  Towards a Data Semantics Management System for Internet Traffic. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

Although current Internet operations generate voluminous data, they remain largely oblivious of traffic data semantics. This poses many inefficiencies and challenges due to emergent or anomalous behavior impacting the vast array of Internet elements such as services and protocols. In this paper, we propose a Data Semantics Management System (DSMS) for learning Internet traffic data semantics to enable smarter semantics-driven networking operations. We extract networking semantics and build and utilize a dynamic ontology of network concepts to better recognize and act upon emergent or abnormal behavior. Our DSMS utilizes: (1) the Latent Dirichlet Allocation algorithm (LDA) for latent features extraction and semantics reasoning; (2) big tables as a cloud-like data storage technique to maintain large-scale data; and (3) the Locality Sensitive Hashing algorithm (LSH) for reducing data dimensionality. Our preliminary evaluation using real Internet traffic shows the efficacy of DSMS for learning behavior of normal and abnormal traffic data and for accurately detecting anomalies at low cost.
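
A minimal sketch pairing two of the named ingredients, LDA for latent-topic extraction and a random-hyperplane LSH signature for dimensionality reduction, on toy "flow documents"; the real DSMS pipeline, features and storage layer are not shown.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    flows = ["tcp syn syn ack http get", "tcp syn syn syn syn syn",  # possible scan
             "udp dns query response", "tcp syn ack http get http get"]
    X = CountVectorizer().fit_transform(flows)
    topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

    def lsh_signature(vec, planes):
        """Sign pattern against random hyperplanes: close vectors share bits."""
        return tuple((vec @ planes.T > 0).astype(int))

    planes = np.random.default_rng(3).standard_normal((8, topics.shape[1]))
    for doc, topic_mix in zip(flows, topics):
        print(lsh_signature(topic_mix, planes), doc)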
 

Sung-Hwan Ahn, Nam-Uk Kim, Tai-Myoung Chung.  2014.  Big data analysis system concept for detecting unknown attacks. Advanced Communication Technology (ICACT), 2014 16th International Conference on. :269-272.

Recently, the threat of previously unknown cyber-attacks has been increasing because existing security systems are not able to detect them. Past cyber-attacks had the simple purpose of leaking personal information by attacking the PC or destroying the system. However, the goal of recent hacking attacks has changed from leaking information and destroying services to attacking large-scale systems such as critical infrastructures and state agencies. Existing defence technologies to counter these attacks are based on pattern matching methods, which are very limited. Because of this, in the event of new and previously unknown attacks, the detection rate becomes very low and false negatives increase. To defend against these unknown attacks, which cannot be detected with existing technology, we propose a new model based on big data analysis techniques that can extract information from a variety of sources to detect future attacks. We expect our model to be the basis of future Advanced Persistent Threat (APT) detection and prevention system implementations.

2015-05-05
Mukkamala, R.R., Hussain, A., Vatrapu, R.  2014.  Towards a Set Theoretical Approach to Big Data Analytics. Big Data (BigData Congress), 2014 IEEE International Congress on. :629-636.

Formal methods, models and tools for social big data analytics are largely limited to graph-theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss the Social Data Analytics Tool (SODATO) that realizes the conceptual model in software and provisions social data analysis based on the conceptual and formal models. Fourth and last, based on the formal model and sentiment analysis of text, we present a method for profiling artifacts and actors and apply this technique to the analysis of big social data collected from the Facebook page of the fast fashion company H&M.
 

Okathe, T., Heydari, S.S., Sood, V., El-khatib, K.  2014.  Unified multi-critical infrastructure communication architecture. Communications (QBSC), 2014 27th Biennial Symposium on. :178-183.

Recent events have brought to light the increasingly intertwined nature of modern infrastructures. As a result, much effort is being put towards protecting these vital infrastructures, without which modern society suffers dire consequences. These infrastructures, due to their intricate nature, behave in complex ways. Improving their resilience and understanding their behavior requires a collaborative effort between the private sector that operates these infrastructures and the government sector that regulates them. This collaboration in the form of information sharing requires a new type of information network whose goal is twofold: to enable infrastructure operators to share status information among interdependent infrastructure nodes, and to allow for the sharing of vital information concerning threats and other contingencies in the form of alerts. A communication model that meets these requirements while maintaining flexibility and scalability is presented in this paper.
 

Hussain, A., Faber, T., Braden, R., Benzel, T., Yardley, T., Jones, J., Nicol, D.M., Sanders, W.H., Edgar, T.W., Carroll, T.E., et al.  2014.  Enabling Collaborative Research for Security and Resiliency of Energy Cyber Physical Systems. Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on. :358-360.

The University of Illinois at Urbana-Champaign (Illinois), Pacific Northwest National Labs (PNNL), and the University of Southern California Information Sciences Institute (USC-ISI) consortium is working toward providing tools and expertise to enable collaborative research to improve the security and resiliency of cyber-physical systems. In this extended abstract we discuss the challenges and the solution space. We demonstrate the feasibility of some of the proposed components through a wide-area situational awareness experiment for the power grid across the three sites.
 

Shar, L., Briand, L., Tan, H.  2014.  Web Application Vulnerability Prediction using Hybrid Program Analysis and Machine Learning. Dependable and Secure Computing, IEEE Transactions on. PP:1-1.

Due to limited time and resources, web software engineers need support in identifying vulnerable code. A practical approach to predicting vulnerable code would enable them to prioritize security auditing efforts. In this paper, we propose using a set of hybrid (static+dynamic) code attributes that characterize input validation and input sanitization code patterns and are expected to be significant indicators of web application vulnerabilities. Because static and dynamic program analyses complement each other, both techniques are used to extract the proposed attributes in an accurate and scalable way. Current vulnerability prediction techniques rely on the availability of data labeled with vulnerability information for training. For many real world applications, past vulnerability data is often not available or at least not complete. Hence, to address both situations where labeled past data is fully available or not, we apply both supervised and semi-supervised learning when building vulnerability predictors based on hybrid code attributes. Given that semi-supervised learning is entirely unexplored in this domain, we describe how to use this learning scheme effectively for vulnerability prediction. We performed empirical case studies on seven open source projects where we built and evaluated supervised and semi-supervised models. When cross validated with fully available labeled data, the supervised models achieve an average of 77 percent recall and 5 percent probability of false alarm for predicting SQL injection, cross site scripting, remote code execution and file inclusion vulnerabilities. With a low amount of labeled data, when compared to the supervised model, the semi-supervised model showed an average improvement of 24 percent higher recall and 3 percent lower probability of false alarm, thus suggesting semi-supervised learning may be a preferable solution for many real world applications where vulnerability data is missing.
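
A generic scikit-learn sketch of the supervised vs. semi-supervised setup described above, with synthetic features standing in for the paper's hybrid static/dynamic attributes; unlabeled modules carry the label -1 and are exploited by self-training.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.semi_supervised import SelfTrainingClassifier

    rng = np.random.default_rng(4)
    # Rows = code modules, columns = hypothetical validation/sanitization attributes.
    X = rng.random((200, 6))
    y_true = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # synthetic "vulnerable" label

    y = y_true.copy()
    y[30:] = -1                                      # only 30 labeled modules

    semi = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
    supervised = LogisticRegression().fit(X[:30], y_true[:30])

    print("semi-supervised acc:", (semi.predict(X[30:]) == y_true[30:]).mean())
    print("supervised-only acc:", (supervised.predict(X[30:]) == y_true[30:]).mean())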
 

Wenmin Xiao, Jianhua Sun, Hao Chen, Xianghua Xu.  2014.  Preventing Client Side XSS with Rewrite Based Dynamic Information Flow. Parallel Architectures, Algorithms and Programming (PAAP), 2014 Sixth International Symposium on. :238-243.

This paper presents the design and implementation of an information flow tracking framework based on code rewrite to prevent sensitive information leaks in browsers, combining the ideas of taint and information flow analysis. Our system has two main processes. First, it abstracts the semantic of JavaScript code and converts it to a general form of intermediate representation on the basis of JavaScript abstract syntax tree. Second, the abstract intermediate representation is implemented as a special taint engine to analyze tainted information flow. Our approach can ensure fine-grained isolation for both confidentiality and integrity of information. We have implemented a proof-of-concept prototype, named JSTFlow, and have deployed it as a browser proxy to rewrite web applications at runtime. The experiment results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not involve any modifications to the target system, our system is readily deployable in practice.
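
A toy Python sketch of taint propagation over a three-address intermediate representation, the kind of analysis JSTFlow performs on its JavaScript IR; the IR, sources and sinks below are invented for illustration.

    TAINT_SOURCES = {"document.location"}

    # Instruction = (dest, op, args); dest becomes tainted if any arg is tainted.
    ir = [("t1", "load",   ["document.location"]),
          ("t2", "substr", ["t1"]),
          ("t3", "concat", ["'<div>'", "t2"]),
          ("_",  "sink:innerHTML", ["t3"])]

    tainted = set(TAINT_SOURCES)
    for dest, op, args in ir:
        if op.startswith("sink:") and any(a in tainted for a in args):
            print(f"potential XSS: tainted value reaches {op[5:]}")
        elif any(a in tainted for a in args):
            tainted.add(dest)        # propagate taint to the destination
    print("tainted:", tainted)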
 

Miloslavskaya, N., Senatorov, M., Tolstoy, A., Zapechnikov, S.  2014.  Information Security Maintenance Issues for Big Security-Related Data. Future Internet of Things and Cloud (FiCloud), 2014 International Conference on. :361-366.

The need to protect big data, particularly data relating to information security (IS) maintenance (ISM) of an enterprise's IT infrastructure, is shown. Worldwide experience of addressing big data ISM issues is briefly summarized and a big data protection problem statement is formulated. An infrastructure for big data ISM is proposed. New application areas for big data IT after addressing ISM issues are listed in conclusion.
 

Singh, S., Sharma, S.  2014.  Improving security mechanism to access HDFS data by mobile consumers using middleware-layer framework. Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on. :1-7.

The revolution in the field of technology has led to the development of cloud computing, which delivers on-demand and easy access to large shared pools of online stored data, software and applications. It has changed the way IT resources are utilized, but at the cost of security breaches such as phishing attacks, impersonation, and lack of confidentiality and integrity. This research work thus deals with the core problem of providing absolute security to the mobile consumers of a public cloud, improving users' mobility by letting them access data stored on the public cloud securely using tokens, without depending upon a third party to generate them. This paper presents an approach to simplifying the process of authenticating and authorizing mobile users by implementing a middleware-centric framework called the MiLAMob model with the huge online data storage system HDFS. It allows consumers to access data from HDFS via mobiles or through social networking sites, e.g. Facebook, Gmail and Yahoo, using the OAuth 2.0 protocol. For authentication, the tokens are generated using a one-time password generation technique and then encrypted using the AES method. By implementing flexible user-based policies and standards, this model improves the authorization process.
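
A hedged sketch of the token flow as described (one-time password generation, then AES encryption); the HOTP construction, key sizes and AES-GCM mode are assumptions for illustration, not the MiLAMob specification. It relies on the 'cryptography' package.

    import hmac, hashlib, os, struct
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226-style one-time password."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10**digits
        return str(code).zfill(digits)

    secret, counter = os.urandom(20), 42
    otp = hotp(secret, counter)

    aes_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    token = AESGCM(aes_key).encrypt(nonce, otp.encode(), b"hdfs-access")  # auth token

    # The service holding aes_key decrypts and verifies the OTP before granting access.
    assert AESGCM(aes_key).decrypt(nonce, token, b"hdfs-access").decode() == otp
    print("token issued:", token.hex())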

2015-05-04
Wenqun Xiu, Xiaoming Li.  2014.  The design of cybercrime spatial analysis system. Information Science and Technology (ICIST), 2014 4th IEEE International Conference on. :132-135.

Manual monitoring is no longer able to match the rapid growth of cybercrime; there is a great need to develop a new spatial analysis technology which allows emergency events to be rapidly and accurately located in the real environment and, furthermore, to establish a correlative analysis model for cybercrime prevention strategy. On the other hand, geographic information systems have changed virtually in data structure, coordinate system and analysis model due to the "uncertainty and hyper-dimension" characteristics of network objects and behavior. In this paper, the spatial rules of typical cybercrime are explored on the basis of GIS with Internet searching and IP tracking technology: (1) Set up a spatial database through IP searching based on criminal evidence. (2) Extend GIS data structures and spatial models, adding a network dimension and virtual attribution to realize dynamic connection between cyber and real space. (3) Design a cybercrime monitoring and prevention system to discover the cyberspace logics based on spatial analysis.
 

Yuying Wang, Xingshe Zhou.  2014.  Spatio-temporal semantic enhancements for event model of cyber-physical systems. Signal Processing, Communications and Computing (ICSPCC), 2014 IEEE International Conference on. :813-818.

Newly emerging cyber-physical systems (CPS) discover events from multiple, distributed sources with multiple levels of detail and heterogeneous data formats, which may be hard to compare and integrate, and which can hardly be combined into a determination for action. While existing efforts have mainly focused on investigating a uniform CPS event representation with spatio-temporal attributes, in this paper we propose a new event model with a two-layer structure: a Basic Event Model (BEM) and an Extended Information Set (EIS). A BEM can be extended with an EIS by a semantic adaptor for spatio-temporal and other attribute enhancement. In particular, we define event processing functions, such as event attribute extraction and composition determination, for CPS action triggering, exploiting the complex event processing (CEP) engine Esper. Examples show that this event model provides several advantages in terms of extensibility, flexibility and support for heterogeneity, and lays the foundations of event-based system design in CPS.
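
A small Python sketch of the two-layer structure as described: a minimal Basic Event Model enriched through a semantic adaptor with spatio-temporal attributes in its Extended Information Set; field and function names are illustrative assumptions.

    from dataclasses import dataclass, field
    from typing import Any, Dict

    @dataclass
    class BasicEvent:                     # BEM: minimal, source-agnostic core
        source: str
        kind: str
        payload: Any
        eis: Dict[str, Any] = field(default_factory=dict)   # EIS extensions

    def spatiotemporal_adaptor(event: BasicEvent, lat: float, lon: float, ts: float):
        """Semantic adaptor: enrich a BEM with comparable space/time attributes."""
        event.eis.update({"lat": lat, "lon": lon, "timestamp": ts})
        return event

    e = spatiotemporal_adaptor(BasicEvent("sensor-7", "overheat", 93.5),
                               lat=34.03, lon=108.76, ts=1700000000.0)
    print(e)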
 

Pratanwanich, N., Lio, P.  2014.  Who Wrote This? Textual Modeling with Authorship Attribution in Big Data. Data Mining Workshop (ICDMW), 2014 IEEE International Conference on. :645-652.

By representing large corpora with concise and meaningful elements, topic-based generative models aim to reduce dimensionality and understand the content of documents. Such techniques originally analyzed the words in documents, but their extensions now accommodate meta-data such as authorship information, which has proved useful for textual modeling. The importance of learning authorship lies in extracting author interests and assigning authors to anonymous texts. The Author-Topic (AT) model, an unsupervised learning technique, successfully exploits authorship information to model both documents and author interests using topic representations. However, the AT model makes the simplifying assumption that each author contributes equally to a multi-author document. To overcome this limitation, we assume that authors make different degrees of contribution to a document, modeled by a Dirichlet distribution. This automatically transforms the unsupervised AT model into the Supervised Author-Topic (SAT) model, which brings the novelty of authorship prediction on anonymous texts. The SAT model outperforms the AT model in identifying authors of documents written by either single or multiple authors, with a better Receiver Operating Characteristic (ROC) curve and a significantly higher Area Under Curve (AUC). The SAT model not only achieves performance competitive with state-of-the-art techniques, e.g. random forests, but also maintains the characteristics of unsupervised models for information discovery, i.e. word distributions of topics, author interests, and author contributions.
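
A hedged numerical sketch of the key modeling step: per-author topic interests mixed with equal weights (AT-style) versus Dirichlet-distributed contribution weights (SAT-style); the toy numbers below are not the paper's inference procedure.

    import numpy as np

    rng = np.random.default_rng(5)
    author_topics = {                      # each author's interest over 3 topics
        "ada":  np.array([0.7, 0.2, 0.1]),
        "brin": np.array([0.1, 0.3, 0.6]),
    }

    # AT model: equal contributions.  SAT-style: Dirichlet-distributed weights.
    equal = np.array([0.5, 0.5])
    unequal = rng.dirichlet(alpha=[5.0, 1.0])  # 'ada' likely dominates this document

    interests = np.vstack(list(author_topics.values()))
    print("AT  document topics:", equal @ interests)
    print("SAT document topics:", unequal @ interests, "weights:", unequal)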
 

Ya Zhang, Yi Wei, Jianbiao Ren.  2014.  Multi-touch Attribution in Online Advertising with Survival Theory. Data Mining (ICDM), 2014 IEEE International Conference on. :687-696.

Multi-touch attribution, which allows distributing the credit to all related advertisements based on their corresponding contributions, has recently become an important research topic in digital advertising. Traditionally, rule-based attribution models have been used in practice. The drawback of such rule-based models lies in the fact that the rules are not derived from the data but based only on simple intuition. With the ever-enhanced capability to track advertisements and users' interaction with them, data-driven multi-touch attribution models, which attempt to infer the contribution from user interaction data, have become an important research direction. We here propose a new data-driven attribution model based on survival theory. By adopting a probabilistic framework, one key advantage of the proposed model is that it is able to remove the presentation biases inherent in most other attribution models. In addition to modeling the attribution, the proposed model is also able to predict a user's 'conversion' probability. We validate the proposed method with a real-world data set obtained from an operational commercial advertising monitoring company. Experimental results show that the proposed method is quite promising in both conversion prediction and attribution.
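
A toy sketch of hazard-based attribution in the spirit of survival theory: each exposure contributes a time-decaying hazard, and credit is split in proportion to each exposure's hazard at conversion time; the channel strengths and decay rate are assumed, not fitted as in the paper.

    import math

    # (channel, time of exposure); conversion observed at t = 10.
    touches = [("display", 1.0), ("search", 6.0), ("email", 9.0)]
    t_conv = 10.0
    beta = {"display": 0.2, "search": 0.5, "email": 0.4}   # channel strength
    omega = 0.3                                            # exponential time decay

    hazards = {ch: beta[ch] * math.exp(-omega * (t_conv - t))
               for ch, t in touches}
    total = sum(hazards.values())
    for ch, h in hazards.items():
        print(f"{ch:8s} credit = {h / total:.2f}")
    # Recency matters: 'email' earns more credit than its raw strength suggests.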

2015-05-01
Thilakanathan, D., Calvo, R.A., Shiping Chen, Nepal, S., Dongxi Liu, Zic, J.  2014.  Secure Multiparty Data Sharing in the Cloud Using Hardware-Based TPM Devices. Cloud Computing (CLOUD), 2014 IEEE 7th International Conference on. :224-231.

The trend towards Cloud computing infrastructure has increased the need for new methods that allow data owners to share their data with others securely while taking into account the needs of multiple stakeholders. The data owner should be able to share confidential data while delegating much of the burden of access control management to the Cloud and trusted enterprises. The lack of such methods to enhance privacy and security may hinder the growth of cloud computing. In particular, there is a growing need to better manage the security keys of data shared in the Cloud. BYOD provides a first step to enabling secure and efficient key management; however, the data owner cannot guarantee that the data consumer's device itself is secure. Furthermore, in current methods the data owner cannot revoke a particular data consumer or group efficiently. In this paper, we address these issues by incorporating a hardware-based Trusted Platform Module (TPM) mechanism called the Trusted Extension Device (TED), together with our security model and protocol, to allow stronger privacy of data compared to software-based security protocols. We demonstrate the concept of using TED for stronger protection and management of cryptographic keys and show how our secure data sharing protocol allows a data owner (e.g., an author) to securely store data via untrusted Cloud services. Our work prevents keys from being stolen by outsiders and/or dishonest authorised consumers, making it particularly attractive for implementation in real-world scenarios.

2015-04-30
Mingqiang Li, Lee, P.P.C.  2014.  Toward I/O-efficient protection against silent data corruptions in RAID arrays. Mass Storage Systems and Technologies (MSST), 2014 30th Symposium on. :1-12.

Although RAID is a well-known technique to protect data against disk errors, it is vulnerable to silent data corruptions that cannot be detected by disk drives. Existing integrity protection schemes designed for RAID arrays often introduce high I/O overhead. Our key insight is that by properly designing an integrity protection scheme that adapts to the read/write characteristics of storage workloads, the I/O overhead can be significantly mitigated. In view of this, this paper presents a systematic study on I/O-efficient integrity protection against silent data corruptions in RAID arrays. We formalize an integrity checking model, and justify that a large proportion of disk reads can be checked with simpler and more I/O-efficient integrity checking mechanisms. Based on this integrity checking model, we construct two integrity protection schemes that provide complementary performance advantages for storage workloads with different user write sizes. We further propose a quantitative method for choosing between the two schemes in real-world scenarios. Our trace-driven simulation results show that with the appropriate integrity protection scheme, we can reduce the I/O overhead to below 15%.
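
An illustrative sketch of per-block checksumming against silent corruption, with a 2+1 XOR parity layout standing in for a real RAID array: checksums are verified on the read path and a failed block is rebuilt from the survivors. The paper's workload-adaptive schemes are not reproduced.

    import hashlib

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    d0, d1 = b"A" * 16, b"B" * 16
    parity = xor(d0, d1)
    checksums = [hashlib.sha256(b).digest() for b in (d0, d1, parity)]

    def read_block(blocks, i):
        """Return block i, rebuilding it from the others if its checksum fails."""
        if hashlib.sha256(blocks[i]).digest() == checksums[i]:
            return blocks[i]
        survivors = [b for j, b in enumerate(blocks) if j != i]
        repaired = xor(survivors[0], survivors[1])   # XOR of the other two blocks
        assert hashlib.sha256(repaired).digest() == checksums[i]
        return repaired

    blocks = [d0, bytes([d1[0] ^ 0x01]) + d1[1:], parity]  # silent bit flip in d1
    print(read_block(blocks, 1) == d1)                     # True: detected and repaired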

Riveiro, M., Lebram, M., Warston, H.  2014.  On visualizing threat evaluation configuration processes: A design proposal. Information Fusion (FUSION), 2014 17th International Conference on. :1-8.

Threat evaluation is concerned with estimating the intent, capability and opportunity of detected objects in relation to our own assets in an area of interest. Inferring whether a target is threatening, and to what degree, is far from a trivial task. Expert operators normally have at their aid different support systems that analyze the incoming data and provide recommendations for actions. Since the ultimate responsibility lies with the operators, it is crucial that they trust and know how to configure and use these systems, as well as have a good understanding of their inner workings, strengths and limitations. To limit the negative effects of inadequate cooperation between the operators and their support systems, this paper presents a design proposal that aims at making the threat evaluation process more transparent. We focus on the initialization, configuration and preparation phases of the threat evaluation process, supporting the user in the analysis of the system's behavior considering the relevant parameters involved in the threat estimations. To do so, we follow a known design process model and implement our suggestions in a proof-of-concept prototype that we evaluate with military expert system designers.