Biblio

Found 1863 results

Filters: First Letter Of Title is S
Daniels, Wilfried, Hughes, Danny, Ammar, Mahmoud, Crispo, Bruno, Matthys, Nelson, Joosen, Wouter.  2017.  SμV - the Security Microvisor: A Virtualisation-based Security Middleware for the Internet of Things. Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference: Industrial Track. :36–42.
The Internet of Things (IoT) creates value by connecting digital processes to the physical world using embedded sensors, actuators and wireless networks. The IoT is increasingly intertwined with critical industrial processes, yet contemporary IoT devices offer limited security features, creating a large new attack surface and inhibiting the adoption of IoT technologies. Hardware security modules address this problem, however, their use increases the cost of embedded IoT devices. Furthermore, millions of IoT devices are already deployed without hardware security support. This paper addresses this problem by introducing a Security MicroVisor (SμV) middleware, which provides memory isolation and custom security operations using software virtualisation and assembly-level code verification. We showcase SμV by implementing a key security feature: remote attestation. Evaluation shows extremely low overhead in terms of memory, performance and battery lifetime for a representative IoT device.
Jayapalan, Avila, Savarinathan, Prem, Priya, Apoorva.  2019.  SystemVue based Secure data transmission using Gold codes. 2019 International Conference on Vision Towards Emerging Trends in Communication and Networking (ViTECoN). :1–4.

Wireless technology has seen tremendous growth in the recent past. The Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme has been utilized in almost all advanced wireless techniques because of the advantages it offers. In this work, a SystemVue-based OFDM transceiver has been developed with AWGN as the channel noise. To mitigate the channel noise, a convolutional code with Viterbi decoding has been employed. Further, to protect the information from malicious users, the data is scrambled with the aid of Gold codes. The performance of the transceiver is analysed through various Bit Error Rate (BER) versus Signal to Noise Ratio (SNR) graphs.
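To make the Gold-code scrambling idea concrete, here is a minimal Python sketch of generating a Gold sequence from two LFSRs and XOR-scrambling a bit stream. The taps, seed, and sequence length are illustrative assumptions, not parameters from the paper.

```python
def lfsr(taps, seed, length):
    """Fibonacci LFSR: emit `length` output bits, feeding back the XOR
    of the 1-indexed tap positions."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return out

def gold_sequence(shift=0, length=31):
    """XOR a pair of degree-5 m-sequences; the relative phase `shift`
    selects one code out of the Gold family (taps are illustrative)."""
    seed = [1, 0, 0, 1, 0]
    a = lfsr([5, 2], list(seed), length)
    b = lfsr([5, 4, 3, 2], list(seed), length)
    b = b[shift:] + b[:shift]
    return [x ^ y for x, y in zip(a, b)]

def scramble(bits, code):
    """XOR data bits with the Gold code; applying it twice recovers the data."""
    return [d ^ c for d, c in zip(bits, code)]
```

Because scrambling is a plain XOR, the receiver descrambles by applying the same code again.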

Span, M. T., Mailloux, L. O., Grimaila, M. R., Young, W. B..  2018.  A Systems Security Approach for Requirements Analysis of Complex Cyber-Physical Systems. 2018 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–8.
Today's highly interconnected and technology reliant environment places greater emphasis on the need for dependably secure systems. This work addresses this problem by detailing a systems security analysis approach for understanding and eliciting security requirements for complex cyber-physical systems. First, a readily understandable description of key architectural analysis definitions and desirable characteristics is provided along with a survey of commonly used security architecture analysis approaches. Next, a tailored version of the System-Theoretic Process Analysis approach for Security (STPA-Sec) is detailed in three phases which supports the development of functional-level security requirements, architectural-level engineering considerations, and design-level security criteria. In particular, these three phases are aligned with the systems and software engineering processes defined in the security processes of NIST SP 800-160. Lastly, this work is important for advancing the science of systems security by providing a viable systems security analysis approach for eliciting, defining, and analyzing traceable security, safety, and resiliency requirements which support evaluation criteria that can be designed-for, built-to, and verified with confidence.
Tekinerdoğan, B., Özcan, K., Yağız, S., Yakın, İ.  2020.  Systems Engineering Architecture Framework for Physical Protection Systems. 2020 IEEE International Symposium on Systems Engineering (ISSE). :1–8.
A physical protection system (PPS) integrates people, procedures, and equipment for the protection of assets or facilities against theft, sabotage, or other malevolent intruder attacks. In this paper we focus on the architecture modeling of PPS to support communication among stakeholders, analysis, and guidance of systems development activities. A common practice for modeling architecture is to use an architecture framework that defines a coherent set of viewpoints. Existing systems engineering modeling approaches appear to be too general and fail to address the domain-specific aspects of PPSs. On the other hand, no dedicated architecture framework approach has yet been provided to address the specific concerns of PPS. In this paper, we present an architecture framework for PPS (PPSAF) that has been developed in a real industrial context focusing on the development of multiple PPSs. The architecture framework consists of a coherent set of six viewpoints: a facility viewpoint, threats and vulnerabilities viewpoint, deterrence viewpoint, detection viewpoint, delay viewpoint, and response viewpoint. We illustrate the application of the architecture framework in the design of a PPS architecture for a building.
[Anonymous].  2018.  A Systems Approach to Indicators of Compromise Utilizing Graph Theory. 2018 IEEE International Symposium on Technologies for Homeland Security (HST). :1–6.
It is common to record indicators of compromise (IoCs) in order to describe a particular breach and to attempt to attribute a breach to a specific threat actor. However, many network security breaches actually involve multiple diverse modalities using a variety of attack vectors. Measuring and recording IoCs in isolation does not provide an accurate view of the actual incident, and thus does not facilitate attribution. A systems approach that describes the entire intrusion as an IoC would be more effective. Graph theory has been utilized to model complex systems of varying types, and this provides a mathematical tool for modeling systems indicators of compromise. This paper describes the application of graph theory to creating systems-based indicators of compromise. A complete methodology is presented for developing systems IoCs that fully describe a complex network intrusion.
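As a toy illustration of the graph-based view, one might model individual indicators as nodes and observed relationships as directed edges, so that everything reachable from an initial indicator forms a single systems-level IoC. The triples below are hypothetical, not drawn from the paper.

```python
from collections import defaultdict

def build_ioc_graph(observations):
    """Directed graph of indicators; each observation is a
    (source_ioc, relation, target_ioc) triple."""
    graph = defaultdict(set)
    for src, _relation, dst in observations:
        graph[src].add(dst)
        graph.setdefault(dst, set())  # ensure sink-only nodes appear too
    return graph

def composite_ioc(graph, start):
    """All indicators reachable from `start`: one systems-level IoC."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen
```

Connected subgraphs like this capture a whole intrusion rather than its isolated artifacts.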
Tan, B., Biglari-Abhari, M., Salcic, Z..  2016.  A system-level security approach for heterogeneous MPSoCs. 2016 Conference on Design and Architectures for Signal and Image Processing (DASIP). :74–81.

Embedded systems are becoming increasingly complex as designers integrate different functionalities into a single application for execution on heterogeneous hardware platforms. In this work we propose a system-level security approach in order to provide isolation of tasks without the need to trust a central authority at run-time. We discuss security requirements that can be found in complex embedded systems that use heterogeneous execution platforms, and by regulating memory access we create mechanisms that allow safe use of shared IP with direct memory access, as well as shared libraries. We also present a prototype Isolation Unit that checks memory transactions and allows for dynamic configuration of permissions.

Zeng, Jing, Yang, Laurence T., Lin, Man, Shao, Zili, Zhu, Dakai.  2017.  System-Level Design Optimization for Security-Critical Cyber-Physical-Social Systems. ACM Trans. Embed. Comput. Syst. 16:39:1–39:21.

Cyber-physical-social systems (CPSS), an emerging computing paradigm, have attracted intensive attention from the research community and industry. We are facing various challenges in designing secure, reliable, and user-satisfying CPSS. In this article, we consider these design issues as a whole and propose a system-level design optimization framework for CPSS design where energy consumption, security-level, and user satisfaction requirements can be fulfilled while satisfying constraints for system reliability. Specifically, we model the constraints (energy efficiency, security, and reliability) as penalty functions to be incorporated into the corresponding objective functions for the optimization problem. A smart office application is presented to demonstrate the feasibility and effectiveness of our proposed design optimization approach.

Mailloux, L. O., Sargeant, B. N., Hodson, D. D., Grimaila, M. R..  2017.  System-level considerations for modeling space-based quantum key distribution architectures. 2017 Annual IEEE International Systems Conference (SysCon). :1–6.

Quantum Key Distribution (QKD) is a revolutionary technology which leverages the laws of quantum mechanics to distribute cryptographic keying material between two parties with theoretically unconditional security. Terrestrial QKD systems are limited to distances of <200 km in both optical fiber and line-of-sight free-space configurations due to severe losses during single photon propagation and the curvature of the Earth. Thus, the feasibility of fielding a low Earth orbit (LEO) QKD satellite to overcome this limitation is being explored. Moreover, in August 2016, the Chinese Academy of Sciences successfully launched the world's first QKD satellite. However, many of the practical engineering performance and security tradeoffs associated with space-based QKD are not well understood for global secure key distribution. This paper presents several system-level considerations for modeling and studying space-based QKD architectures and systems. More specifically, this paper explores the behaviors and requirements that researchers must examine to develop a model for studying the effectiveness of QKD between LEO satellites and ground stations.

Reshikeshan, Sree Subiksha M., Illindala, Mahesh S..  2020.  Systematically Encoded Polynomial Codes to Detect and Mitigate High-Status-Number Attacks in Inter-Substation GOOSE Communications. 2020 IEEE Industry Applications Society Annual Meeting. :1–7.
Inter-substation Generic Object Oriented Substation Events (GOOSE) communications that are used for critical protection functions have several cyber-security vulnerabilities. GOOSE messages are directly mapped to the Layer 2 Ethernet without network and transport layer headers that provide data encapsulation. The high-status-number attack is a malicious attack on GOOSE messages that allows hackers to completely take over intelligent electronic devices (IEDs) subscribing to GOOSE communications. The status-number parameter of GOOSE messages, stNum is tampered with in these attacks. Given the strict delivery time requirement of 3 ms for GOOSE messaging, it is infeasible to encrypt the GOOSE payload. This work proposes to secure the sensitive stNum parameter of the GOOSE payload using systematically encoded polynomial codes. Exploiting linear codes allows for the security features to be encoded in linear time, in contrast to complex hashing algorithms. At the subscribing IED, the security feature is used to verify that the stNum parameter has not been tampered with during transmission in the insecure medium. The decoding and verification using syndrome computation at the subscriber IED is also accomplished in linear time.
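The systematic encoding and syndrome verification the abstract describes can be sketched in a few lines of GF(2) polynomial arithmetic. The generator polynomial and message values below are illustrative assumptions, not the code actually used in the paper.

```python
def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division, with polynomials as ints
    (bit i is the coefficient of x**i). Linear in the dividend's bit length."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode(stnum, gen):
    """Systematic codeword: message bits followed by parity bits, where the
    parity is the remainder of x**r * m(x) modulo the generator g(x)."""
    r = gen.bit_length() - 1
    return (stnum << r) | gf2_mod(stnum << r, gen)

def syndrome(codeword, gen):
    """Zero iff the codeword is a multiple of g(x), i.e. untampered."""
    return gf2_mod(codeword, gen)
```

Because the encoding is systematic, the subscriber can read `stNum` directly from the high bits and still verify the parity with one linear-time division.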
Mane, Y. D., Khot, U. P..  2020.  A Systematic Way to Implement Private Tor Network with Trusted Middle Node. 2020 International Conference for Emerging Technology (INCET). :1–6.

Initially, legitimate users performed all their activities over the internet through a normal web browser [1]. To obtain a more secure service and protection against bot activity, legitimate users switched from a normal web browser to low-latency anonymous communication such as the Tor Browser. Traffic monitoring in the Tor network is difficult, as packets travel from source to destination in encrypted form and the Tor network hides the sender's identity from the destination. Lately, however, even illegitimate users such as attackers and criminals have moved their activity to the Tor Browser. The secured Tor network makes botnet detection more difficult, and existing botnet detection tools have become inefficient against Tor-based bots because of the Tor Browser's features. Since the Tor Browser is highly secure, and for ethical reasons, practical experiments on the live network are not advisable: they could affect the performance and functionality of the Tor Browser, and could endanger users in situations where a failure of Tor's anonymity has severe consequences. Therefore, in the proposed research work, Private Tor Networks (PTN) with a trusted middle node have been created on physical or virtual machines with dedicated resources. The motivation behind the trusted middle node is to make the private Tor network more efficient and to increase its performance.

Saeed, Imtithal A., Selamat, Ali, Rohani, Mohd Foad, Krejcar, Ondrej, Chaudhry, Junaid Ahsenali.  2020.  A Systematic State-of-the-Art Analysis of Multi-Agent Intrusion Detection. IEEE Access. 8:180184–180209.
Multi-agent architectures have been successful in attracting considerable attention among computer security researchers because of their demonstrated capabilities, such as autonomy, embedded intelligence, learning and a self-growing knowledge-base, high scalability, fault tolerance, and automatic parallelism. These characteristics have made this technology a de facto standard for developing ambient security systems to meet the open and dynamic nature of today's online communities. Although multi-agent architectures are increasingly studied in the area of computer security, there is still not enough empirical evidence on their performance in intrusion and attack detection. The aim of this paper is to report the systematic literature review conducted in the context of specific research questions, to investigate multi-agent IDS architectures and highlight the issues that affect their performance in terms of detection accuracy and response time. We used pertinent keywords and terms to search and retrieve the most recent research studies on multi-agent IDS architectures from the major research databases and digital libraries such as SCOPUS, Springer, and IEEE Xplore. The search processes resulted in a number of studies; among them, there were journal articles, book chapters, conference papers, dissertations, and theses. The obtained studies were assessed and filtered, and finally over 71 studies were chosen to answer the research questions. The results of this study show that multi-agent architectures include several advantages that can help in the development of ambient IDS. However, it has been found that there are several issues in current multi-agent IDS architectures that may degrade the accuracy and response time of intrusion and attack detection.
Based on our findings, the issues of multi-agent IDS architectures include limitations in the techniques, mechanisms, and schemes used for multi-agent IDS adaptation and learning, load balancing, scalability, fault-tolerance, and high communication overhead. It has also been found that new measurement metrics are required for evaluating multi-agent IDS architectures.
Babu, T. Kishore, Guruprakash, C. D..  2019.  A Systematic Review of the Third Party Auditing in Cloud Security: Security Analysis, Computation Overhead and Performance Evaluation. 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). :86–91.
Cloud storage offers considerable efficiency and security for users' data and provides high flexibility to the user. Hackers attempt several attacks to steal data, which increases concern about data security in the cloud. The Third Party Auditing (TPA) method was introduced to check data integrity, and several TPA methods have been developed to improve the privacy and efficiency of the integrity checking method. Various methods involved in TPA have been analyzed in this review in terms of function, security, and overall performance. The Merkle Hash Tree (MHT) method provides efficiency and security in checking the integrity of data. The computational overhead of proof verification is also analyzed in this review. The communication cost of most TPA methods is observed to be low, and there is a need to improve the security of public auditing.
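As a rough illustration of how a Merkle Hash Tree supports integrity checking, here is a minimal Python sketch: any change to a stored block changes the root hash the auditor compares against. The hash function and block layout are illustrative choices, not the specification from any surveyed TPA scheme.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Hash each block, then repeatedly hash adjacent pairs upward;
    the last node is duplicated on levels with an odd node count."""
    level = [_h(b) for b in blocks]
    if not level:
        return _h(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

An auditor keeps only the 32-byte root; re-deriving it from the stored blocks detects tampering without transferring the whole data set.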
Tewari, Naveen, Datt, Gopal.  2021.  A Systematic Review of Security Issues and challenges with Futuristic Wearable Internet of Things (IoTs). 2021 International Conference on Technological Advancements and Innovations (ICTAI). :319–323.
Privacy and security are the key challenges of wearable IoT. Smart wearables are becoming a popular choice because of their indispensable applications in clinical medicine and healthcare, health management, workplaces, education, and scientific research. Currently, IoT faces several challenges, such as user unawareness, a lack of efficient security protocols, vulnerable wireless communication, and improper device management. The paper presents a review of security and privacy issues in wearable IoT devices with the following structure: (i) background of IoT systems and applications; (ii) security and privacy issues in IoT; (iii) popular wearable IoTs in demand; (iv) existing IoT security and privacy solutions; and (v) approaches to secure the futuristic IoT-based environment. Finally, the study summarizes security vulnerabilities in IoT, countermeasures and existing security and privacy solutions, and futuristic smart wearables.
Hettiarachchi, Charitha, Do, Hyunsook.  2019.  A Systematic Requirements and Risks-Based Test Case Prioritization Using a Fuzzy Expert System. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security (QRS). :374–385.

The use of risk information can help software engineers identify software components that are likely vulnerable or require extra attention when testing. Some studies have shown that requirements risk-based approaches can be effective in improving the effectiveness of regression testing techniques. However, the risk estimation processes used in such approaches can be subjective, time-consuming, and costly. In this research, we introduce a fuzzy expert system that emulates human thinking to address the subjectivity-related issues in the risk estimation process in a systematic and efficient way and thus further improve the effectiveness of test case prioritization. Further, the required data for our approach was gathered by employing a semi-automated process that made the risk estimation process less subjective. The empirical results indicate that the new prioritization approach can improve the rate of fault detection over several existing test case prioritization techniques, while reducing threats to subjective risk estimation.

Mozaffari-Kermani, M., Sur-Kolay, S., Raghunathan, A., Jha, N. K..  2015.  Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare. IEEE Journal of Biomedical and Health Informatics. 19:1893–1905.

Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a diagnosis may have life-threatening consequences and could cause distrust. On the other hand, not only may a false diagnosis prompt users to distrust the machine-learning algorithm and even abandon the entire system but also such a false positive classification may cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.

Vieira, Alfredo Menezes, Junior, Rubens de Souza Matos, Ribeiro, Admilson de Ribamar Lima.  2021.  Systematic Mapping on Prevention of DDoS Attacks on Software Defined Networks. 2021 IEEE International Systems Conference (SysCon). :1–8.
Cyber attacks are a major concern for network administrators as the occurrences of such events are continuously increasing on the Internet. Software-defined networks (SDN) enable many management applications, but they may also become targets for attackers. Due to the separation of the data plane and the control plane, the controller appears as a new element in SDN networks, allowing centralized control of the network, becoming a strategic target in carrying out an attack. According to reports generated by security labs, the frequency of the distributed denial of service (DDoS) attacks has seen an increase in recent years, characterizing a major threat to the SDN. However, few research papers address the prevention of DDoS attacks on SDN. Therefore, this work presents a Systematic Mapping of Literature, aiming at identifying, classifying, and thus disseminating current research studies that propose techniques and methods for preventing DDoS attacks in SDN. When answering these questions, it was determined that the SDN controller was vulnerable to possible DDoS attacks. No prevention methods were found in the literature for the first phase of the attack (when attackers try to deceive users and infect the host). Therefore, the security of software-defined networks still needs improvement over DDoS attacks, despite the evident risk of an attack targeting the SDN controller.
Huang, Keman, Madnick, Stuart, Choucri, Nazli, Zhang, Fang.  2021.  A Systematic Framework to Understand Transnational Governance for Cybersecurity Risks from Digital Trade. Global Policy. 12:625–638.
Governing cybersecurity risks from digital trade is a growing responsibility for governments and corporations. This study develops a systematic framework to delineate and analyze the strategies that governments and corporations take to address cybersecurity risks from digital trade. It maps out the current landscape based on a collection of 75 cases where governments and corporations interact to govern transnational cybersecurity risks. This study reveals that: first, governing cybersecurity risks from digital trade is a global issue, whereby most governments implement policies out of concern that cybersecurity risks embedded in purchased transnational digital products can influence their domestic political and societal systems. Second, governments dominate the governance interactions by implementing trade policies whereas corporations simply comply. Corporations do, however, have chances to take more active roles in constructing the governance system. Third, supply chain cybersecurity risks have more significant impacts on the governance mode between governments and corporations, whereas concerns about different national cybersecurity risks do not. Fourth, the interactions between governments and corporations reveal the existence of loops that can amplify or reduce cybersecurity risks. This provides policy implications on transnational cybersecurity governance for policy makers and business leaders to consider their potential options and understand the global digital trade environment when cybersecurity and digital trade overlap.
Chhokra, Ajay, Kulkarni, Amogh, Hasan, Saqib, Dubey, Abhishek, Mahadevan, Nagabhushan, Karsai, Gabor.  2017.  A Systematic Approach of Identifying Optimal Load Control Actions for Arresting Cascading Failures in Power Systems. Proceedings of the 2nd Workshop on Cyber-Physical Security and Resilience in Smart Grids. :41–46.
Cascading outages in power networks cause blackouts which lead to huge economic and social consequences. The traditional form of load shedding is avoidable in many cases by identifying optimal load control actions. However, if there is a change in the system topology (adding or removing loads, lines, etc.), the calculations have to be performed again. This paper addresses this problem by providing a workflow that 1) generates system models from IEEE CDF specifications, 2) identifies a collection of blackout causing contingencies, 3) dynamically sets up an optimization problem, and 4) generates a table of mitigation strategies in terms of minimal load curtailment. We demonstrate the applicability of our proposed methodology by finding load curtailment actions for N-k contingencies (k = 1, 2, 3) in the IEEE 14 Bus system.
Nguyen, Hoai Viet, Lo Iacono, Luigi, Federrath, Hannes.  2018.  Systematic Analysis of Web Browser Caches. Proceedings of the 2nd International Conference on Web Studies. :64–71.
The caching of frequently requested web resources has been an integral part of the web since its beginnings. Cacheability is the main pillar of the web's scalability and an important mechanism for optimizing resource consumption and performance. Caches exist in many variations and locations on the path between web client and server, with the browser cache being ubiquitous to date. Web developers need a profound understanding of the concepts and policies of web caching even when exploiting these advantages is not relevant. Neglecting web caching may otherwise result in more severe consequences than the simple loss of scalability and efficiency: recent misuse of web caching systems has been shown to affect application behavior as well as privacy and security. In this paper we introduce a tool-based approach to disburden web developers while keeping them informed about caching influences. Our first contribution is a structured test suite containing 397 web caching test cases. In order to make this collection easily adoptable, we introduce an automated testing tool for executing the test cases against web browsers. Based on the developed testing tool, we conduct a systematic analysis of the behavior of web browser caches and their compliance with relevant caching standards. Our findings on desktop and mobile versions of Chrome, Firefox, Safari and Edge show many diversities as well as discrepancies. Appropriate tooling supports web developers in uncovering such adversities. As our baseline of test cases is specified using a specification language that enables extensibility, developers as well as administrators and researchers can systematically add and empirically explore caching properties of interest, even in non-browser scenarios.
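A tiny sketch of the kind of caching-policy logic such test cases exercise: parsing `Cache-Control` and deciding whether a shared cache may store a response. This is a deliberate simplification covering only a few directives; the full rules are defined by the HTTP caching standards (RFC 7234 and successors).

```python
def parse_cache_control(header):
    """Split a Cache-Control header into a {directive: value} dict;
    value-less directives map to True."""
    directives = {}
    for part in header.split(","):
        part = part.strip().lower()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name] = value or True
    return directives

def shared_cache_may_store(header):
    """Simplified check: a shared cache must not store responses marked
    no-store or private (many further conditions exist in the standard)."""
    d = parse_cache_control(header)
    return not ("no-store" in d or "private" in d)
```

Even this reduced model shows why a structured test suite is useful: each directive combination is a distinct case a browser cache may or may not honor.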
Checkoway, Stephen, Maskiewicz, Jacob, Garman, Christina, Fried, Joshua, Cohney, Shaanan, Green, Matthew, Heninger, Nadia, Weinmann, Ralf-Philipp, Rescorla, Eric, Shacham, Hovav.  2016.  A Systematic Analysis of the Juniper Dual EC Incident. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :468–479.

In December 2015, Juniper Networks announced multiple security vulnerabilities stemming from unauthorized code in ScreenOS, the operating system for their NetScreen VPN routers. The more sophisticated of these vulnerabilities was a passive VPN decryption capability, enabled by a change to one of the elliptic curve points used by the Dual EC pseudorandom number generator. In this paper, we describe the results of a full independent analysis of the ScreenOS randomness and VPN key establishment protocol subsystems, which we carried out in response to this incident. While Dual EC is known to be insecure against an attacker who can choose the elliptic curve parameters, Juniper had claimed in 2013 that ScreenOS included countermeasures against this type of attack. We find that, contrary to Juniper's public statements, the ScreenOS VPN implementation has been vulnerable since 2008 to passive exploitation by an attacker who selects the Dual EC curve point. This vulnerability arises due to apparent flaws in Juniper's countermeasures as well as a cluster of changes that were all introduced concurrently with the inclusion of Dual EC in a single 2008 release. We demonstrate the vulnerability on a real NetScreen device by modifying the firmware to install our own parameters, and we show that it is possible to passively decrypt an individual VPN session in isolation without observing any other network traffic. We investigate the possibility of passively fingerprinting ScreenOS implementations in the wild. This incident is an important example of how guidelines for random number generation, engineering, and validation can fail in practice.

Hibshi, Hanan.  2016.  Systematic Analysis of Qualitative Data in Security. Proceedings of the Symposium and Bootcamp on the Science of Security. :52–52.

This tutorial will introduce participants to Grounded Theory, which is a qualitative framework to discover new theory from an empirical analysis of data. This form of analysis is particularly useful when analyzing text, audio or video artifacts that lack structure, but contain rich descriptions. We will frame Grounded Theory in the context of qualitative methods and case studies, which complement quantitative methods, such as controlled experiments and simulations. We will contrast the approaches developed by Glaser and Strauss, and introduce coding theory - the most prominent qualitative method for performing analysis to discover Grounded Theory. Topics include coding frames, first- and second-cycle coding, and saturation. We will use examples from security interview scripts to teach participants: developing a coding frame, coding a source document to discover relationships in the data, developing heuristics to resolve ambiguities between codes, and performing second-cycle coding to discover relationships within categories. Then, participants will learn how to discover theory from coded data. Participants will further learn about inter-rater reliability statistics, including Cohen's and Fleiss' Kappa, Krippendorff's Alpha, and Vanbelle's Index. Finally, we will review how to present Grounded Theory results in publications, including how to describe the methodology, report observations, and describe threats to validity.
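Of the inter-rater reliability statistics mentioned, Cohen's Kappa is the simplest to compute for two raters over nominal labels; a minimal sketch (the example labels are invented):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is agreement expected by chance. Undefined when
    p_e == 1 (both raters always pick the same single label)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance, which is why it is preferred over raw percent agreement when comparing coders.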

LeSaint, J., Reed, M., Popick, P..  2015.  System security engineering vulnerability assessments for mission-critical systems and functions. 2015 Annual IEEE Systems Conference (SysCon) Proceedings. :608–613.

This paper describes multiple system security engineering techniques for assessing system security vulnerabilities and discusses the application of these techniques at different system maturity points. The proposed vulnerability assessment approach allows a systems engineer to identify and assess vulnerabilities early in the life cycle and to continually increase the fidelity of the vulnerability identification and assessment as the system matures.

Colesky, Michael, Caiza, Julio C..  2018.  A System of Privacy Patterns for Informing Users: Creating a Pattern System. Proceedings of the 23rd European Conference on Pattern Languages of Programs. :16:1–16:11.

The General Data Protection Regulation mandates data protection in the European Union. This includes data protection by design and having privacy-preserving defaults. This legislation has been in force since May 2018, promising severe consequences for violation. Fulfilling its mandate for data protection is not trivial, though. One approach for realizing this is the use of privacy design patterns. We have recently started consolidating such patterns into useful collections. In this paper we improve a subset of these, constructing a pattern system. This helps to identify contextually appropriate patterns. It better illustrates their application and relation to each other. The pattern system guides software developers, so that they can help users understand how their information system uses personal data. To achieve this, we rewrite our patterns to meet specific requirements. In particular, we add implementability and interconnection, while improving consistency and organization. This results in a system of patterns for informing users.

Moskvichev, A. D., Dolgachev, M. V..  2020.  System of Collection and Analysis Event Log from Sources under Control of Windows Operating System. 2020 International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon). :1–5.

The purpose of this work is to implement a universal system for collecting and analyzing event logs from sources running the Windows operating system. The authors use event-forwarding technology to collect data from the logs, and a security information and event management (SIEM) system detects incidents from the received events. They analyze existing methods for transmitting event log entries from Windows sources and describe in detail how to connect event sources to the event collector without a domain controller: each event source is authenticated using a certificate created by the event collector. The authors also propose a scheme for connecting the event collector to the SIEM system, which must meet the requirements for use in conjunction with event-forwarding technology. Finally, they present the layout of their test stand and the results of testing the event-forwarding setup.
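The certificate-based, domainless setup the paper describes can be sketched with the standard Windows Event Forwarding tooling. This is a hedged configuration outline, not the authors' exact procedure; the hostname `collector.example.org` and the file `subscription.xml` are placeholders, and the policy path shown is the documented location for source-initiated subscriptions.

```shell
# --- On the event collector (no domain controller involved) ---
winrm quickconfig            # enable the WinRM listener
wecutil qc                   # quick-configure the Windows Event Collector service
wecutil cs subscription.xml  # create a source-initiated subscription from XML

# --- On each event source ---
winrm quickconfig
# Point the source at the collector over HTTPS (port 5986) via the local
# policy setting "Configure target Subscription Manager"
# (Computer Configuration > Administrative Templates > Windows Components >
#  Event Forwarding), using a value of the form:
#   Server=HTTPS://collector.example.org:5986/wsman/SubscriptionManager/WEC,Refresh=60
# The client certificate issued by the collector (as in the paper) then
# authenticates the source instead of Kerberos/domain credentials.
```

Using HTTPS with client certificates is what removes the dependency on a domain controller: the default Kerberos authentication of event forwarding is replaced by mutual TLS between source and collector.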