Biblio

Found 1189 results

S
Schäfer, Matthias, Fuchs, Markus, Strohmeier, Martin, Engel, Markus, Liechti, Marc, Lenders, Vincent.  2019.  BlackWidow: Monitoring the Dark Web for Cyber Security Information. 2019 11th International Conference on Cyber Conflict (CyCon). 900:1–21.

The Dark Web, a conglomerate of services hidden from search engines and regular users, is used by cyber criminals to offer all kinds of illegal services and goods. Multiple Dark Web offerings are highly relevant for the cyber security domain in anticipating and preventing attacks, such as information about zero-day exploits, stolen datasets with login information, or botnets available for hire. In this work, we analyze and discuss the challenges related to information gathering in the Dark Web for cyber security intelligence purposes. To facilitate information collection and the analysis of large amounts of unstructured data, we present BlackWidow, a highly automated modular system that monitors Dark Web services and fuses the collected data in a single analytics framework. BlackWidow relies on a Docker-based micro service architecture which permits the combination of both preexisting and customized machine learning tools. BlackWidow represents all extracted data and the corresponding relationships extracted from posts in a large knowledge graph, which is made available to its security analyst users for search and interactive visual exploration. Using BlackWidow, we conduct a study of seven popular services on the Deep and Dark Web across three different languages with almost 100,000 users. Within less than two days of monitoring time, BlackWidow managed to collect years of relevant information in the areas of cyber security and fraud monitoring. We show that BlackWidow can infer relationships between authors and forums and detect trends for cybersecurity-related topics. Finally, we discuss exemplary case studies surrounding leaked data and preparation for malicious activity.
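
As a rough illustration of the knowledge-graph representation described above (not the paper's code, which is unpublished; the node and relation names below are invented), authors, posts, and forums can be modeled as typed nodes whose relationships are then queryable:

```python
# Sketch of a typed forum knowledge graph (invented schema and toy data).
import networkx as nx

g = nx.MultiDiGraph()

# Typed nodes: authors, posts, forums (all example data).
g.add_node("author:acidburn", kind="author")
g.add_node("post:1337", kind="post", topic="credential dump")
g.add_node("forum:example-market", kind="forum", language="ru")

# Relations extracted from scraped posts.
g.add_edge("author:acidburn", "post:1337", rel="wrote")
g.add_edge("post:1337", "forum:example-market", rel="posted_in")

# Query: which forums has each author been active in?
for author in (n for n, d in g.nodes(data=True) if d["kind"] == "author"):
    forums = {
        f
        for _, p, d in g.out_edges(author, data=True) if d["rel"] == "wrote"
        for _, f, d2 in g.out_edges(p, data=True) if d2["rel"] == "posted_in"
    }
    print(author, "->", sorted(forums))
```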

Schäfer, Steven, Schneider, Sigurd, Smolka, Gert.  2016.  Axiomatic Semantics for Compiler Verification. Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs. :188–196.

Based on constructive type theory, we study two idealized imperative languages GC and IC and verify the correctness of a compiler from GC to IC. GC is a guarded command language with underspecified execution order defined with an axiomatic semantics. IC is a deterministic low-level language with linear sequential composition and lexically scoped gotos defined with a small-step semantics. We characterize IC with an axiomatic semantics and prove that the compiler from GC to IC preserves specifications. The axiomatic semantics we consider model total correctness and map programs to continuous predicate transformers. We define the axiomatic semantics of GC and IC with elementary inductive predicates and show that the predicate transformer described by a program can be obtained compositionally by recursion on the syntax of the program using a fixed point operator for loops and continuations. We also show that two IC programs are contextually equivalent if and only if their predicate transformers are equivalent.
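
A toy rendering of the predicate-transformer view sketched above (in Python rather than the paper's constructive type theory; the encoding is ours, not the authors'): each statement denotes a function from postconditions to preconditions, obtained compositionally by recursion on the syntax.

```python
# Toy predicate transformers: a predicate is a function from states to bool;
# a statement maps a postcondition to its weakest precondition.
def skip(post):
    return post

def assign(var, expr):
    # wp(x := e, Q) = Q[e/x]
    return lambda post: (lambda s: post({**s, var: expr(s)}))

def seq(t1, t2):
    # wp(S1; S2, Q) = wp(S1, wp(S2, Q)): composition by recursion on syntax
    return lambda post: t1(t2(post))

def cond(guard, t_then, t_else):
    # wp(if g then S1 else S2, Q)
    return lambda post: (lambda s: t_then(post)(s) if guard(s) else t_else(post)(s))

# wp(x := x + 1; x := 2 * x, x == 4) holds in the state {x: 1}:
prog = seq(assign("x", lambda s: s["x"] + 1), assign("x", lambda s: 2 * s["x"]))
print(prog(lambda s: s["x"] == 4)({"x": 1}))  # True
```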

Schear, Nabil, Cable, Patrick T., II, Moyer, Thomas M., Richard, Bryan, Rudd, Robert.  2016.  Bootstrapping and Maintaining Trust in the Cloud. Proceedings of the 32nd Annual Conference on Computer Security Applications. :65–77.

Today's infrastructure as a service (IaaS) cloud environments rely upon full trust in the provider to secure applications and data. Cloud providers do not offer the ability to create hardware-rooted cryptographic identities for IaaS cloud resources or sufficient information to verify the integrity of systems. Trusted computing protocols and hardware like the TPM have long promised a solution to this problem. However, these technologies have not seen broad adoption because of their complexity of implementation, low performance, and lack of compatibility with virtualized environments. In this paper we introduce keylime, a scalable trusted cloud key management system. keylime provides an end-to-end solution for both bootstrapping hardware rooted cryptographic identities for IaaS nodes and for system integrity monitoring of those nodes via periodic attestation. We support these functions in both bare-metal and virtualized IaaS environments using a virtual TPM. keylime provides a clean interface that allows higher level security services like disk encryption or configuration management to leverage trusted computing without being trusted computing aware. We show that our bootstrapping protocol can derive a key in less than two seconds, we can detect system integrity violations in as little as 110ms, and that keylime can scale to thousands of IaaS cloud nodes.
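
A loose sketch of the periodic-attestation loop (all interfaces here are hypothetical; the real keylime verifies signed TPM quotes through registrar and verifier services):

```python
# Loose sketch of keylime-style periodic attestation (hypothetical interfaces;
# the real system verifies signed TPM quotes chained to a hardware-rooted key).
import hashlib
import time
from dataclasses import dataclass

KNOWN_GOOD = {hashlib.sha256(b"expected measurement log").hexdigest()}

@dataclass
class Quote:
    measurement_log: bytes          # in reality: signed PCR quote + IMA log

def fetch_quote(node: str) -> Quote:
    # Stand-in for requesting a TPM quote from the node's attestation agent.
    return Quote(measurement_log=b"expected measurement log")

def verify(node: str) -> bool:
    digest = hashlib.sha256(fetch_quote(node).measurement_log).hexdigest()
    return digest in KNOWN_GOOD     # signature checking omitted in this sketch

def monitor(nodes, period_s=0.11):  # paper reports detection in as little as 110 ms
    for _ in range(3):              # bounded loop for the sketch
        for node in nodes:
            if not verify(node):
                print(f"integrity violation on {node}: revoke its keys")
        time.sleep(period_s)

monitor(["node-a", "node-b"])
```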

Scheffer, V., Ipach, H., Becker, C..  2019.  Distribution Grid State Assessment for Control Reserve Provision Using Boundary Load Flow. 2019 IEEE Milan PowerTech. :1–6.

With the increasing expansion of wind and solar power plants, these technologies will also have to contribute control reserve to guarantee frequency stability within the next couple of years. In order to maintain the security of supply at the same level in the future, it must be ensured that wind and solar power plants are able to feed electricity into the distribution grid without bottlenecks when activated. This work presents a grid state assessment that takes into account the special features of control reserve provision. Identifying the future grid state, which is necessary for an ex ante evaluation, poses the challenge of forecasting loads. The Boundary Load Flow method takes load uncertainties into account and is used to estimate a possible interval for all grid parameters. Grid congestions can thus be detected preventively, and suppliers of control reserve can be approved or excluded. A validation in combination with an exemplary application shows the feasibility of the overall methodology.
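
As a crude stand-in for the Boundary Load Flow idea (numbers and coefficients invented; the method itself bounds full AC load-flow results), one can propagate load intervals through a linearized line-loading calculation and flag congestion whenever the upper bound exceeds the rating:

```python
# Crude interval propagation for line loading under load uncertainty
# (illustrative only; Boundary Load Flow bounds full AC load-flow results).
lines = {
    # line: (rating in MW, sensitivity of its flow to each uncertain load)
    "L1": (40.0, [0.6, 0.3]),
    "L2": (25.0, [0.2, 0.8]),
}
load_intervals = [(10.0, 30.0), (5.0, 20.0)]  # forecast min/max per load, MW

for name, (rating, coeffs) in lines.items():
    lo = sum(c * (a if c >= 0 else b) for c, (a, b) in zip(coeffs, load_intervals))
    hi = sum(c * (b if c >= 0 else a) for c, (a, b) in zip(coeffs, load_intervals))
    status = "congestion possible" if hi > rating else "ok"
    print(f"{name}: flow in [{lo:.1f}, {hi:.1f}] MW, rating {rating} MW -> {status}")
```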

Scheitle, Q., Gasser, O., Rouhi, M., Carle, G..  2017.  Large-Scale Classification of IPv6-IPv4 Siblings with Variable Clock Skew. 2017 Network Traffic Measurement and Analysis Conference (TMA). :1–9.

Linking the growing IPv6 deployment to existing IPv4 addresses is an interesting field of research, be it for network forensics, structural analysis, or reconnaissance. In this work, we focus on classifying pairs of server IPv6 and IPv4 addresses as siblings, i.e., running on the same machine. Our methodology leverages active measurements of TCP timestamps and other network characteristics, which we measure against a diverse ground truth of 682 hosts. We define and extract a set of features, including estimation of variable (opposed to constant) remote clock skew. On these features, we train a manually crafted algorithm as well as a machine-learned decision tree. By conducting several measurement runs and training in cross-validation rounds, we aim to create models that generalize well and do not overfit our training data. We find both models to exceed 99% precision in train and test performance. We validate scalability by classifying 149k siblings in a large-scale measurement of 371k sibling candidates. We argue that this methodology, thoroughly cross-validated and likely to generalize well, can aid comparative studies of IPv6 and IPv4 behavior in the Internet. Striving for applicability and replicability, we release ready-to-use source code and raw data from our study.
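
A simplified version of the skew feature (the paper's feature set is richer, and the constants here are arbitrary): estimate remote clock skew as the slope of TCP-timestamp offsets over measurement time, then feed such features to a decision tree.

```python
# Sketch: clock-skew feature from TCP timestamps plus a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def clock_skew_ppm(recv_times_s, tcp_ts, hz=1000):
    """Slope of (remote clock - local clock) over time, in ppm (simplified)."""
    offsets = tcp_ts / hz - recv_times_s
    slope, _ = np.polyfit(recv_times_s, offsets, 1)
    return slope * 1e6

# Toy feature vectors per IPv6/IPv4 candidate pair, e.g.
# [|skew_v6 - skew_v4| in ppm, TCP options fingerprint match (0/1)].
X = np.array([[0.3, 1.0], [45.0, 0.0], [0.8, 1.0], [60.0, 0.0]])
y = np.array([1, 0, 1, 0])  # 1 = siblings, 0 = non-siblings (toy labels)

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[0.5, 1.0]]))  # close skews + matching options -> sibling
```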

Scheitle, Quirin, Gasser, Oliver, Nolte, Theodor, Amann, Johanna, Brent, Lexi, Carle, Georg, Holz, Ralph, Schmidt, Thomas C., Wählisch, Matthias.  2018.  The Rise of Certificate Transparency and Its Implications on the Internet Ecosystem. Proceedings of the Internet Measurement Conference 2018. :343–349.

In this paper, we analyze the evolution of Certificate Transparency (CT) over time and explore the implications of exposing certificate DNS names from the perspective of security and privacy. We find that certificates in CT logs have seen exponential growth. Website support for CT has also constantly increased, with now 33% of established connections supporting CT. With the increasing deployment of CT, there are also concerns of information leakage due to all certificates being visible in CT logs. To understand this threat, we introduce a CT honeypot and show that data from CT logs is being used to identify targets for scanning campaigns only minutes after certificate issuance. We present and evaluate a methodology to learn and validate new subdomains from the vast number of domains extracted from CT logged certificates.
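
The honeypot hinges on watching CT logs for newly issued certificates. RFC 6962 logs expose plain HTTP endpoints, so a minimal watcher (the log URL is an example; parsing of the returned Merkle leaves is omitted) could look like:

```python
# Minimal CT log poller using the RFC 6962 HTTP API (get-sth / get-entries).
# Extracting DNS names from the returned leaf certificates is omitted here.
import time
import requests

LOG = "https://ct.googleapis.com/logs/argon2024"  # example log URL

def tree_size():
    return requests.get(f"{LOG}/ct/v1/get-sth", timeout=10).json()["tree_size"]

seen = tree_size()
while True:
    now = tree_size()
    if now > seen:
        entries = requests.get(
            f"{LOG}/ct/v1/get-entries",
            params={"start": seen, "end": now - 1}, timeout=10,
        ).json()["entries"]
        print(f"{len(entries)} new certificates logged")
        seen = now
    time.sleep(30)
```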

Schellenberg, Falk, Gnad, Dennis R. E., Moradi, Amir, Tahoori, Mehdi B..  2018.  Remote Inter-chip Power Analysis Side-channel Attacks at Board-level. Proceedings of the International Conference on Computer-Aided Design. :114:1–114:7.

The current practice in board-level integration is to incorporate chips and components from numerous vendors. A fully trusted supply chain for all used components and chipsets is an important, yet extremely difficult to achieve, prerequisite for validating a complete board-level system for safe and secure operation. An increasing risk is that most chips nowadays run software or firmware, typically updated throughout the system lifetime, making it practically impossible to validate the full system at every given point in the manufacturing, integration, and operational life cycle. This risk is elevated in devices that run 3rd-party firmware. In this paper, we show that an FPGA used as a common accelerator in various boards can be reprogrammed by software to introduce a sensor, suitable as a remote power analysis side-channel attack vector at the board level. We show successful power analysis attacks from one FPGA on the board to another chip implementing RSA and AES cryptographic modules. Since the sensor is only mapped through firmware, this threat is very hard to detect, because data can be exfiltrated without requiring inter-chip communication between victim and attacker. Our results also prove the potential vulnerability in which any untrusted chip on the board can launch such attacks on the remaining system.
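
The attack core is standard correlation power analysis; only the trace source (an FPGA-internal sensor rather than an oscilloscope) is unusual. A generic CPA sketch on toy data (a real attack uses the actual AES S-box and real measurements):

```python
# Generic correlation power analysis core (toy sizes and random stand-in data).
import numpy as np

SBOX = np.random.permutation(256)  # placeholder: substitute the real AES S-box

def hamming_weight(x):
    bits = np.unpackbits(np.asarray(x, dtype=np.uint8)[:, None], axis=1)
    return bits.sum(axis=1)

def cpa(traces, plaintexts, byte_idx):
    """traces: (n, samples); plaintexts: (n, 16). Returns best key-byte guess."""
    t = (traces - traces.mean(0)) / traces.std(0)   # standardized traces
    scores = []
    for k in range(256):
        model = hamming_weight(SBOX[plaintexts[:, byte_idx] ^ k])
        m = (model - model.mean()) / model.std()
        corr = np.abs(m @ t) / len(model)           # Pearson corr per sample
        scores.append(corr.max())
    return int(np.argmax(scores))

traces = np.random.randn(200, 500)                  # stand-in power traces
plaintexts = np.random.randint(0, 256, (200, 16))
print("best key-byte guess:", cpa(traces, plaintexts, byte_idx=0))
```
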
Scherzinger, Stefanie, Seifert, Christin, Wiese, Lena.  2019.  The Best of Both Worlds: Challenges in Linking Provenance and Explainability in Distributed Machine Learning. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1620–1629.

Machine learning experts prefer to think of their input as a single, homogeneous, and consistent data set. However, when analyzing large volumes of data, the entire data set may not be manageable on a single server, but must be stored on a distributed file system instead. Moreover, with the pressing demand to deliver explainable models, the experts may no longer focus on the machine learning algorithms in isolation, but must take into account the distributed nature of the data stored, as well as the impact of any data pre-processing steps upstream in their data analysis pipeline. In this paper, we make the point that even basic transformations during data preparation can impact the model learned, and that this is exacerbated in a distributed setting. We then sketch our vision of end-to-end explainability of the model learned, taking the pre-processing into account. In particular, we point out the potential of linking the contributions of research on data provenance with the efforts on explainability in machine learning. In doing so, we highlight pitfalls we may experience in a distributed system on the way to generating more holistic explanations for our machine learning models.

Schiavone, E., Ceccarelli, A., Bondavalli, A..  2017.  Continuous Biometric Verification for Non-Repudiation of Remote Services. Proceedings of the 12th International Conference on Availability, Reliability and Security. :4:1–4:10.

As our society massively relies on ICT, security services are becoming essential to protect the users and entities involved. Amongst such services, non-repudiation provides evidence of actions, protects against their denial, and helps solve disputes between parties. For example, it prevents denial of past behaviors, such as having sent or received messages. Noteworthy, if the information flow is continuous, evidence should be produced for the entirety of the flow and not only at specific points. Further, non-repudiation should be guaranteed by mechanisms that do not reduce the usability of the system or application. To meet these challenges, in this paper, we propose two solutions for non-repudiation of remote services based on multi-biometric continuous authentication. We present an application scenario that discusses how users and service providers are protected with such solutions. We also discuss the technological readiness of biometrics for non-repudiation services: the outcome is that, under specific assumptions, it is actually ready.

Schiefer, G., Gabel, M., Mechler, J., Schoknecht, A., Citak, M..  2017.  Security in a Distributed Key Management Approach. 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS). :816–821.

Cloud computing offers many advantages, such as flexibility or resource efficiency, and can significantly reduce costs. However, when sensitive data is outsourced to a cloud provider, classified records can leak. To protect data owners and application providers from a privacy breach, data must be encrypted before it is uploaded. In this work, we present a distributed key management scheme that handles user-specific keys in a single-tenant scenario. The underlying database is encrypted, and the secret key is split into parts and only reconstructed temporarily in memory. Our scheme distributes shares of the key to the different entities. We address bootstrapping, key recovery, the adversary model, and the resulting security guarantees.
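
The paper's concrete scheme is its contribution; as a generic illustration of splitting a key so that it only ever exists whole in memory, an n-of-n XOR split looks like this (a threshold scheme such as Shamir's would additionally allow k-of-n recovery):

```python
# Generic n-of-n key splitting by XOR: the key exists in full only while all
# shares are combined in memory (the paper's scheme also addresses recovery).
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(key: bytes, n: int) -> list[bytes]:
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # last share fixes the XOR
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)
shares = split(key, 3)          # hand one share to each entity
assert combine(shares) == key   # reconstructed only temporarily in memory
```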

Schild, Christopher-J., Schultz, Simone.  2016.  Linking Deutsche Bundesbank Company Data Using Machine-Learning-Based Classification: Extended Abstract. Proceedings of the Second International Workshop on Data Science for Macro-Modeling. :10:1–10:3.

We present a process of linking various Deutsche Bundesbank data sources on companies based on a semi-automatic classification. The linkage process involves data cleaning and harmonization, blocking, construction of comparison features, as well as training and testing a statistical classification model on a "ground-truth" subset of known matches and non-matches. The evaluation of our method shows that the process limits the need for manual classification to a small percentage of ambiguously classified match candidates.
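
The described pipeline (blocking, comparison features, supervised classification) is standard record linkage; a compact sketch with invented features and toy records:

```python
# Record-linkage sketch: block on a cheap key, build per-pair similarity
# features, train a classifier on labeled matches/non-matches (toy data).
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def block_key(rec):
    return (rec["name"][:1].lower(), rec["zip"][:2])  # candidates must agree

def features(a, b):
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return [name_sim, float(a["zip"] == b["zip"])]

pairs = [  # "ground-truth" subset: (record A, record B, match label)
    ({"name": "Mueller GmbH", "zip": "60311"}, {"name": "Müller GmbH", "zip": "60311"}, 1),
    ({"name": "Mueller GmbH", "zip": "60311"}, {"name": "Meyer AG", "zip": "60313"}, 0),
]
X = [features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([features(*pairs[0][:2])])[0, 1])  # match probability
```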

Schiliro, F., Moustafa, N., Beheshti, A..  2020.  Cognitive Privacy: AI-enabled Privacy using EEG Signals in the Internet of Things. 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (DependSys). :73–79.

With the advent of Industry 4.0, the Internet of Things (IoT), and Artificial Intelligence (AI), smart entities are now able to read the minds of users by extracting cognitive patterns from electroencephalogram (EEG) signals. Such brain data may include users' experiences, emotions, motivations, and other previously private mental and psychological processes. Accordingly, users' cognitive privacy may be violated, and the right to cognitive privacy should protect individuals against the unconsented intrusion by third parties into the brain data, as well as against the unauthorized collection of those data. This has caused growing concern among users and industry experts that laws are needed to protect the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity. In this paper, we propose an AI-enabled EEG model, namely Cognitive Privacy, that aims to protect EEG data from disclosure and to classify users and their tasks from those data. We present a model that protects data from disclosure using normalized correlation analysis and classifies subjects (i.e., a multi-classification problem) and their tasks (i.e., eye open and eye close as a binary classification problem) using a long short-term memory (LSTM) deep learning approach. The model has been evaluated using the PhysioNet BCI EEG data set, and the results have revealed its high performance in classifying users and their tasks while achieving high data privacy.
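
The classification side can be sketched with a small Keras LSTM over windowed multi-channel EEG (shapes, hyperparameters, and data below are placeholders, not the paper's):

```python
# Sketch of the LSTM classifier: windows of multi-channel EEG in, either a
# subject label (multi-class) or eye-open/eye-closed (binary) out.
import numpy as np
import tensorflow as tf

n_win, timesteps, channels, n_subjects = 256, 160, 64, 10
X = np.random.randn(n_win, timesteps, channels).astype("float32")  # stand-in
y = np.random.randint(0, n_subjects, n_win)                        # stand-in

model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, channels)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(n_subjects, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=32)
# For the binary eye-open/eye-closed task, swap the head for
# Dense(1, activation="sigmoid") and the loss for "binary_crossentropy".
```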

Schilling, Robert, Werner, Mario, Nasahl, Pascal, Mangard, Stefan.  2018.  Pointing in the Right Direction - Securing Memory Accesses in a Faulty World. Proceedings of the 34th Annual Computer Security Applications Conference. :595–604.

Reading and writing memory are, besides computation, the most common operations a processor performs. The correctness of these operations is therefore essential for the proper execution of any program. However, as soon as fault attacks are considered, assuming that the hardware performs its memory operations as instructed is not valid anymore. In particular, attackers may induce faults with the goal of reading or writing incorrectly addressed memory, which can have various critical safety and security implications. In this work, we present a solution to this problem and propose a new method for protecting every memory access inside a program against address tampering. The countermeasure comprises two building blocks. First, every pointer inside the program is redundantly encoded using a multiresidue error detection code. The redundancy information is stored in the unused upper bits of the pointer with zero overhead in terms of storage. Second, load and store instructions are extended to link data with the corresponding encoded address from the pointer. Wrong memory accesses subsequently infect the data value, allowing the software to detect the error. For evaluation purposes, we implemented our countermeasure in a RISC-V processor, tested it on an FPGA development board, and evaluated the induced overhead. Furthermore, an LLVM-based C compiler has been modified to automatically encode all data pointers, to perform encoded pointer arithmetic, and to emit the extended load/store instructions with linking support. Our evaluations show that the countermeasure induces an average overhead of 10 % in terms of code size and 7 % regarding runtime, which makes it suitable for practical adoption.
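
The encoding idea can be imitated in a few lines: keep the pointer's residues modulo small pairwise-coprime constants in its unused upper bits and re-check them on every access (the moduli and bit layout here are arbitrary, not the paper's multiresidue code):

```python
# Toy residue-encoded pointers: residues of the 48-bit address live in the
# otherwise unused upper bits and are re-checked before every dereference.
MODULI = (3, 5, 7)   # arbitrary pairwise-coprime moduli, 3 bits per residue
ADDR_BITS = 48

def encode(addr):
    tag = 0
    for m in MODULI:
        tag = (tag << 3) | (addr % m)
    return (tag << ADDR_BITS) | addr

def checked_addr(ptr):
    addr, tag = ptr & ((1 << ADDR_BITS) - 1), ptr >> ADDR_BITS
    for m in reversed(MODULI):   # residues unpack in reverse packing order
        if (tag & 7) != addr % m:
            raise RuntimeError("fault detected: pointer residue mismatch")
        tag >>= 3
    return addr

p = encode(0x7F00_DEAD_BEEF)
assert checked_addr(p) == 0x7F00_DEAD_BEEF

try:
    checked_addr(p ^ 0x40)       # a fault flips one address bit
except RuntimeError as e:
    print(e)
```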

Schliep, Michael, Kariniemi, Ian, Hopper, Nicholas.  2017.  Is Bob Sending Mixed Signals? Proceedings of the 2017 on Workshop on Privacy in the Electronic Society. :31–40.

Demand for end-to-end secure messaging has been growing rapidly, and companies have responded by releasing applications that implement end-to-end secure messaging protocols. Signal and protocols based on Signal dominate the secure messaging landscape. In this work, we analyze the conversational security properties provided by the Signal Android application against a variety of real-world adversaries. We identify vulnerabilities that allow the Signal server to learn the contents of attachments, undetectably re-order and drop messages, and add and drop participants from group conversations. We then perform proof-of-concept attacks against the application to demonstrate the practicality of these vulnerabilities, and suggest mitigations that can detect our attacks. The main conclusion of our work is that we need to consider more than confidentiality and integrity of messages when designing future protocols. We also stress that protocols must protect against compromised servers and, at a minimum, implement a trust-but-verify model.

Schlüter, F., Hetterscheid, E..  2017.  A Simulation Based Evaluation Approach for Supply Chain Risk Management Digitalization Scenarios. 2017 International Conference on Industrial Engineering, Management Science and Application (ICIMSA). :1–5.

Supply-chain-wide proactive risk management based on real-time transparency of risk-related information is required to increase the security of modern, volatile supply chains. At this time, little or no empirical or objective information about the benefits of digitalization for supply chain risk management is available. A method is needed that supports conclusions about the estimated costs and benefits of digitalization initiatives. The paper presents a flexible simulation-based approach for assessing digitalization scenarios prior to realization. The assessment approach is integrated into a framework, and its applicability is shown in a case study of a German steel producer, evaluating digitalization effects on the mean lead-time-at-risk.
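
The evaluation hinges on a lead-time-at-risk style metric; the flavor of such a simulation (distributions and parameters invented, not the paper's model) is easy to convey:

```python
# Monte Carlo flavor of a lead-time-at-risk comparison between a baseline
# supply chain and a digitalized one with faster disruption detection.
import random

def lead_time_at_risk(detect_delay_days, runs=100_000, q=0.95):
    lead_times = []
    for _ in range(runs):
        base = random.gauss(10, 1)              # normal transport time, days
        disrupted = random.random() < 0.05      # 5% disruption probability
        penalty = detect_delay_days + random.expovariate(0.5) if disrupted else 0
        lead_times.append(base + penalty)
    lead_times.sort()
    return lead_times[int(q * runs)]            # 95th-percentile lead time

print("baseline    LTaR:", round(lead_time_at_risk(detect_delay_days=5.0), 2))
print("digitalized LTaR:", round(lead_time_at_risk(detect_delay_days=0.5), 2))
```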

Schmeidl, Florian, Nazzal, Bara, Alalfi, Manar H..  2019.  Security Analysis for SmartThings IoT Applications. 2019 IEEE/ACM 6th International Conference on Mobile Software Engineering and Systems (MOBILESoft). :25–29.

This paper presents a fully automated static analysis approach and a tool, Taint-Things, for the identification of tainted flows in SmartThings IoT apps. Taint-Things accurately identified all tainted flows reported by one of the state-of-the-art tools with at least 4 times improved performance. In addition, our approach reports each potentially vulnerable tainted flow in the form of a concise security slice, which could provide security auditors with an effective and precise tool to pinpoint security issues in SmartThings apps under test.
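
The core of taint tracking (sources, propagation through assignments, sinks) fits in a few lines; this is a generic sketch, not Taint-Things itself, which analyzes SmartThings Groovy apps and emits security slices:

```python
# Generic taint propagation over straight-line assignments: flag any flow
# from a sensitive source into a network sink.
SOURCES = {"lock_code", "presence"}   # sensitive device state (example names)
SINKS = {"httpPost", "sendSms"}       # external communication (example names)

program = [
    ("x", ["lock_code"]),             # x = lock_code
    ("msg", ["x", "greeting"]),       # msg = x + greeting
    ("httpPost", ["msg"]),            # sink call
]

tainted = set(SOURCES)
for target, operands in program:
    if target in SINKS:
        leaked = [o for o in operands if o in tainted]
        if leaked:
            print(f"tainted flow into {target}: {leaked}")
    elif any(o in tainted for o in operands):
        tainted.add(target)           # taint propagates through assignment
```
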
Schmerl, Bradley, Cámara, Javier, Gennari, Jeffrey, Garlan, David, Casanova, Paulo, Moreno, Gabriel A., Glazier, Thomas J., Barnes, Jeffrey M..  2014.  Architecture-based Self-protection: Composing and Reasoning About Denial-of-service Mitigations. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :2:1–2:12.

Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.
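
Utility-based selection of a mitigation tactic can be made concrete as follows (tactics, quality scores, and weights are invented for illustration, not Rainbow's actual models):

```python
# Sketch of utility-based tactic selection: score each DoS mitigation against
# weighted business qualities and pick the best one for the current context.
TACTICS = {
    "add_capacity":  {"availability": 0.9, "cost": 0.2, "user_experience": 0.9},
    "throttle":      {"availability": 0.6, "cost": 0.9, "user_experience": 0.5},
    "blackhole_ips": {"availability": 0.8, "cost": 0.8, "user_experience": 0.3},
}

def best_tactic(weights):
    def utility(scores):
        return sum(weights[q] * scores[q] for q in weights)
    return max(TACTICS, key=lambda t: utility(TACTICS[t]))

# Different business contexts weight the qualities differently.
print(best_tactic({"availability": 0.7, "cost": 0.1, "user_experience": 0.2}))
print(best_tactic({"availability": 0.2, "cost": 0.7, "user_experience": 0.1}))
```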

Schmid, Stefan, Arquint, Linard, Gross, Thomas R..  2016.  Using Smartphones As Continuous Receivers in a Visible Light Communication System. Proceedings of the 3rd Workshop on Visible Light Communication Systems. :61–66.

Visible Light Communication (VLC) allows a lighting infrastructure to be reused for communication while its main purpose of illumination is carried out at the same time. Light sources based on Light Emitting Diodes (LEDs) are attractive as they are inexpensive, ubiquitous, and allow rapid modulation. This paper describes how to integrate smartphones into such a communication system that supports networking for a wide range of devices, such as toys with single LEDs as transmitters and receivers as well as interconnected LED light bulbs. The main challenge is how to employ the smartphone without any (hardware) modification as a receiver, using the integrated camera as a (slow) light sampling device. This paper presents a simple software-based solution, exploiting the rolling shutter effect and slow-motion video capturing capabilities of the latest smartphones to enable continuous reception and real-time integration into an existing VLC system. Evaluation results demonstrate a working prototype and report communication distances of up to 3 m and a maximum data throughput of more than 1200 b/s, improving upon previous work.
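
The rolling-shutter trick reduces to treating each image row as one light sample, so decoding an on-off-keyed transmitter amounts to thresholding row means (a bare-bones sketch on stand-in data; the paper's receiver also handles synchronization and real modulation):

```python
# Bare-bones rolling-shutter decoding: each camera row is one time sample of
# the LED, so an on-off-keyed signal appears as bright/dark stripes.
import numpy as np

frame = np.random.randint(0, 256, (1080, 1920))  # stand-in for a camera frame

rows = frame.mean(axis=1)                  # one brightness sample per row
bits = (rows > rows.mean()).astype(int)    # threshold into on/off samples

# Collapse runs of identical samples into symbols; how many rows make up one
# bit depends on exposure settings, so we only report the run lengths here.
changes = np.flatnonzero(np.diff(bits)) + 1
runs = np.diff(np.concatenate(([0], changes, [len(bits)])))
print("run lengths in rows:", runs[:10])
```
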
Schmidt, Mark, Pfeiffer, Tom, Grill, Christin, Huber, Robert, Jirauschek, Christian.  2019.  Coexistence of Intensity Pattern Types in Broadband Fourier Domain Mode Locked (FDML) Lasers. 2019 Conference on Lasers and Electro-Optics Europe European Quantum Electronics Conference (CLEO/Europe-EQEC). :1-1.

Fourier domain mode locked (FDML) lasers, in which the sweep period of the swept bandpass filter is synchronized with the roundtrip time of the optical field, are broadband and rapidly tunable fiber ring laser systems, which offer rich dynamics. A detailed understanding is important from a fundamental point of view, and also required in order to improve current FDML lasers, which have not yet reached their coherence limit. Here, we study the formation of localized patterns in the intensity trace of FDML laser systems based on a master equation approach [1] derived from the nonlinear Schrödinger equation for polarization maintaining setups, which shows excellent agreement with experimental data. A variety of localized patterns and chaotic or bistable operation modes were previously discovered in [2–4] by investigating primarily quasi-static regimes within a narrow sweep bandwidth, where a delay differential equation model was used. In particular, the formation of so-called holes, which are characterized by a dip in the intensity trace and a rapid phase jump, is described. Such holes have tentatively been associated with Nozaki-Bekki holes, which are solutions to the complex Ginzburg-Landau equation. In Fig. 1(b) to (d), small sections of a numerical solution of our master equation are presented for a partially dispersion compensated polarization maintaining FDML laser setup. Within our approach, we are able to study the full sweep dynamics over a broad sweep range of more than 100 nm. This allows us to identify different co-existing intensity patterns within a single sweep. In general, high frequency distortions in the intensity trace of FDML lasers [5] are mainly caused by synchronization mismatches due to the fiber dispersion or a detuning of the roundtrip time of the optical field from the sweep period of the swept bandpass filter. These timing errors lead to rich and complex dynamics over many roundtrips and are a major source of noise, greatly affecting imaging and sensing applications. For example, the imaging quality in optical coherence tomography, where FDML lasers are superior sources, is significantly reduced [5].

Schmidt, Sabine S., Mazurczyk, Wojciech, Keller, Jörg, Caviglione, Luca.  2017.  A New Data-Hiding Approach for IP Telephony Applications with Silence Suppression. Proceedings of the 12th International Conference on Availability, Reliability and Security. :83:1–83:6.

Although information hiding can be used for licit purposes, it is increasingly exploited by malware to exfiltrate data or to coordinate attacks in a stealthy manner. Therefore, investigating new methods for creating covert channels is fundamental to completely assess the security of the Internet. Since the popularity of the carrier plays a major role, this paper proposes to hide data within VoIP traffic. Specifically, we exploit Voice Activity Detection (VAD), which suspends the transmission during speech pauses to reduce bandwidth requirements. To create the covert channel, our method transforms a VAD-activated VoIP stream into a non-VAD one. Then, hidden information is injected into fake RTP packets generated during silence intervals. Results indicate that steganographically modified VAD-activated VoIP streams offer a good trade-off between stealthiness and steganographic bandwidth.
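
The injection mechanics can be hinted at with a fake-RTP-packet builder (a toy; a real implementation must mimic the codec's payload format and packet timing during silence intervals):

```python
# Toy construction of a fake RTP packet carrying hidden data, to be emitted
# during VAD silence intervals (real use must match codec payload formats).
import struct

def rtp_packet(seq, timestamp, ssrc, hidden: bytes, pt=0):
    v_p_x_cc = 0x80                  # version 2, no padding/extension/CSRC
    m_pt = pt & 0x7F                 # marker bit clear, payload type
    header = struct.pack("!BBHII", v_p_x_cc, m_pt, seq, timestamp, ssrc)
    return header + hidden           # covert payload instead of audio

pkt = rtp_packet(seq=42, timestamp=160 * 42, ssrc=0xDEADBEEF, hidden=b"secret")
print(len(pkt), "bytes")             # 12-byte RTP header + hidden payload
```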

Schneeberger, Tanja, Scholtes, Mirella, Hilpert, Bernhard, Langer, Markus, Gebhard, Patrick.  2019.  Can Social Agents elicit Shame as Humans do? 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :164–170.

This paper presents a study that examines whether social agents can elicit the social emotion shame as humans do. For that, we use job interviews, which are highly evaluative situations per se. We vary the interview style (shame-eliciting vs. neutral) and the job interviewer (human vs. social agent). Our dependent variables include observational data regarding the social signals of shame and shame regulation, as well as self-assessment questionnaires regarding the felt uneasiness and discomfort in the situation. Our results indicate that social agents can elicit shame to the same extent as humans. This gives insights into the impact of social agents on users and the emotional connection between them.

Schneider, Fred B..  2000.  Enforceable Security Policies. ACM Trans. Inf. Syst. Secur.. 3:30–50.

A precise characterization is given for the class of security policies enforceable with mechanisms that work by monitoring system execution, and automata are introduced for specifying exactly that class of security policies. Techniques to enforce security policies specified by such automata are also discussed.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
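
A security automaton of the kind this paper formalizes is easy to render executable: states, a partial transition function over observed actions, and termination on any undefined transition. Below, a toy automaton enforcing "no send after a file read", the classic example from this literature:

```python
# Executable toy security automaton enforcing "no network send after a file
# read": the classic example of a policy enforceable by execution monitoring.
class SecurityAutomaton:
    def __init__(self):
        self.state = "clean"
        self.delta = {
            ("clean", "read_file"): "dirty",
            ("clean", "send"): "clean",
            ("dirty", "read_file"): "dirty",
            # ("dirty", "send") is undefined: the automaton rejects it.
        }

    def step(self, action):
        key = (self.state, action)
        if key not in self.delta:
            raise PermissionError(f"policy violation: {action} in {self.state}")
        self.state = self.delta[key]

monitor = SecurityAutomaton()
try:
    for action in ["send", "read_file", "send"]:
        monitor.step(action)
except PermissionError as e:
    print(e)   # the trailing send is rejected and execution halts
```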

Schneider, Jens, Bläser, Max, Wien, Mathias.  2018.  Sparse Coding Based Frequency Adaptive Loop Filtering for Video Coding. Proceedings of the 23rd Packet Video Workshop. :48–53.

In-loop filtering is an important task in video coding, as it refines both the reconstructed signal for display and the pictures used for inter-prediction. In order to remove coding artifacts, machine learning based methods are assumed to be beneficial, as they utilize some prior knowledge on the characteristics of raw images. In this contribution, a dictionary learning / sparse coding based in-loop filter and a frequency adaptation model based on the lp-ball energy in the spectral domain are proposed. Thereby, the dictionary is trained on raw data and the algorithms are controlled mainly by the parameter for the sparsity. The frequency adaptation model results in further improvement of the sparse coding based loop filter. Experimental results show that the proposed method results in coding gains of up to -4.6 % at peak and -1.74 % on average against HEVC in a Random Access coding configuration.
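
The dictionary-learning component can be illustrated with scikit-learn (a generic patch-reconstruction sketch on stand-in data, not the paper's codec integration):

```python
# Generic sparse-coding sketch: learn a patch dictionary, then reconstruct
# noisy patches from a few atoms (the paper integrates this idea into the
# HEVC in-loop filter together with a frequency adaptation model).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
clean = rng.standard_normal((500, 64))        # stand-in for 8x8 image patches
noisy = clean[:10] + 0.1 * rng.standard_normal((10, 64))

dico = MiniBatchDictionaryLearning(
    n_components=100, transform_algorithm="omp",
    transform_n_nonzero_coefs=5,              # the sparsity parameter
).fit(clean)

codes = dico.transform(noisy)                 # sparse code per patch
denoised = codes @ dico.components_
print("reconstruction error:", float(np.mean((denoised - clean[:10]) ** 2)))
```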

Schneider, S., Lansing, J., Gao, Fangjian, Sunyaev, A..  2014.  A Taxonomic Perspective on Certification Schemes: Development of a Taxonomy for Cloud Service Certification Criteria. System Sciences (HICSS), 2014 47th Hawaii International Conference on. :4998–5007.

Numerous cloud service certifications (CSCs) are emerging in practice. However, in their striving to establish the market standard, CSC initiatives proceed independently, resulting in a disparate collection of CSCs that are predominantly proprietary, based on various standards, and differ in terms of scope, audit process, and underlying certification schemes. Although literature suggests that a certification's design influences its effectiveness, research on CSC design is lacking and there are no commonly agreed structural characteristics of CSCs. Informed by data from 13 expert interviews and 7 cloud computing standards, this paper delineates and structures CSC knowledge by developing a taxonomy for criteria to be assessed in a CSC. The taxonomy consists of 6 dimensions with 28 subordinate characteristics and classifies 328 criteria, thereby building foundations for future research to systematically develop and investigate the efficacy of CSC designs as well as providing a knowledge base for certifiers, cloud providers, and users.