# Biblio

Found 16 results

Filters: Keyword is complex systems
2021-09-07
2020.  2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :3221–3228.
Conversational agents (CAs), often referred to as chatbots, are being widely deployed within existing commercial frameworks and online service websites. As society moves further into incorporating data-rich systems, like the internet of things (IoT), into daily life, it is expected that conversational agents will take on an increasingly important role in helping users manage these complex systems. Here, the concept of personality is becoming increasingly important, as we seek more human-friendly ways to interact with these CAs. In this work, a conceptual framework is proposed that considers how existing standard psychological and persona models could be mapped to different kinds of CA functionality beyond strictly dialogue. As CAs become more diverse in their abilities, and more integrated with different kinds of systems, it is important to consider how function can be affected by the design of agent personality, whether intentionally designed or not. Based on this framework, derived archetype classes of CAs are presented as starting points that can aid designers, developers, and the curious in thinking about how to work toward better CA personality development.
2021-03-29
2020.  2020 Annual Reliability and Maintainability Symposium (RAMS). :1–7.

Safety and security of complex critical infrastructures are very important for economic, environmental and social reasons. The interdisciplinary and inter-system dependencies within these infrastructures introduce difficulties in safety and security design. Late discovery of safety and security design weaknesses can lead to increased costs, additional system complexity, ineffective mitigation measures and delays to the deployment of the systems. Traditionally, safety and security assessments are handled using different methods and tools, although some concepts are very similar, by specialized experts in different disciplines, and are performed at different phases of the system design life-cycle. The methodology proposed in this paper supports a concurrent safety and security Defense in Depth (DiD) assessment at an early design phase; it is designed to handle safety and security at a high level rather than focus on specific practical technologies. It is assumed that regardless of the perceived level of security defenses in place, a determined (motivated, capable and/or well-funded) attacker can find a way to penetrate a layer of defense. While traditional security research focuses on removing vulnerabilities and increasing the difficulty of exploiting weaknesses, our higher-level approach focuses on limiting the attacker's reach and increasing the system's capability for detection, identification, mitigation and tracking. The proposed method can concurrently assess basic safety and security DiD design principles such as Redundancy, Physical separation, Functional isolation, Facility functions, Diversity, Defense lines/Facility and Computer Security zones, Safety classes/Security Levels, Safety divisions and physical gates/conduits (as defined by the International Atomic Energy Agency (IAEA) and international standards) and provide early feedback to the system engineer.
A prototype tool is developed that can parse the exported project file of the interdisciplinary model. Based on a set of safety and security attributes, the tool is able to assess aspects of the safety and security DiD capabilities of the design. Its results can be used to identify errors, improve the design and cut costs before a formal human expert inspection. The tool is demonstrated on a case study of an early conceptual design of a complex system of a nuclear power plant.
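As a rough illustration of how such an attribute-based assessment might work, here is a minimal sketch; the component model, attribute names, and the two rules below are invented for this sketch and are not taken from the paper's tool:

```python
# Hypothetical sketch: checking two Defense-in-Depth principles
# (redundancy and physical separation) over components parsed from
# an exported interdisciplinary model.

def check_did(components):
    """Return a list of (function, finding) pairs for DiD violations."""
    findings = []
    # Group components by the safety functions they provide.
    by_function = {}
    for c in components:
        for f in c["functions"]:
            by_function.setdefault(f, []).append(c["name"])
    # Redundancy: every safety function needs at least two providers.
    for f, providers in by_function.items():
        if len(providers) < 2:
            findings.append((f, "no redundancy: single provider %s" % providers[0]))
    # Physical separation: redundant providers must not share a location.
    for f, providers in by_function.items():
        locations = {c["location"] for c in components if c["name"] in providers}
        if len(providers) >= 2 and len(locations) < 2:
            findings.append((f, "redundant providers share one location"))
    return findings

components = [
    {"name": "PumpA", "functions": ["cooling"], "location": "room1"},
    {"name": "PumpB", "functions": ["cooling"], "location": "room1"},
    {"name": "Valve1", "functions": ["isolation"], "location": "room2"},
]
print(check_did(components))
```

Real tools of this kind would parse the exported project file into such component records before applying the rules; the rules themselves are where the IAEA-derived principles would be encoded.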

2021-03-01
2020.  2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The classical SWOT methodology and many of the tools based on it are very static, designed for a single stable project, and lack dynamics [1]. This paper proposes the idea of combining several SWOT analyses, enriched with the computing with words (CWW) paradigm, into a single network. In this network, an individual analysis of the situation is treated as a node. The whole structure is based on fuzzy cognitive maps (FCM), which support forward and backward chaining, so it is called fuzzy SWOT maps. The fuzzy SWOT maps methodology newly introduces dynamics in which projects interact, as they do in a real dynamic environment. The whole fuzzy SWOT maps network structure has explainable artificial intelligence (XAI) traits because each node in the network is a "white box": the entire reasoning chain can be tracked and checked to determine why a particular decision was made or why and how one project affects another. To confirm the viability of the approach, a case with three interacting projects has been analyzed with a developed prototype software tool and the results are presented.
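The FCM backbone of such an approach can be sketched in a few lines; the weight matrix, node meanings, and squashing steepness below are illustrative assumptions, not the paper's actual model:

```python
import math

# Minimal fuzzy cognitive map sketch: nodes hold activation levels in
# [0, 1], edges carry signed influence weights, and forward chaining
# iterates A(t+1) = sigmoid(W^T A(t)).

def sigmoid(x, steepness=2.0):
    """Squash a weighted sum back into the fuzzy activation range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-steepness * x))

def fcm_step(activations, weights):
    """One forward-chaining step over the whole map."""
    n = len(activations)
    return [sigmoid(sum(activations[i] * weights[i][j] for i in range(n)))
            for j in range(n)]

# Three interacting nodes, e.g. ProjectA strength, ProjectB threat, Budget.
W = [
    [0.0,  0.6,  0.4],   # node 0 boosts nodes 1 and 2
    [-0.5, 0.0,  0.0],   # node 1 suppresses node 0
    [0.3,  0.0,  0.0],   # node 2 mildly boosts node 0
]
state = [0.8, 0.2, 0.5]
for _ in range(10):
    state = fcm_step(state, W)
print([round(s, 3) for s in state])
```

The "white box" property noted in the abstract corresponds to the fact that every activation change here can be traced back through the explicit weight matrix.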
2020-10-05
2019.  2019 Military Communications and Information Systems Conference (MilCIS). :1–6.
Complex military systems are typically cyber-physical systems which are the targets of high-level threat actors, and must be able to operate within a highly contested cyber environment. There is an emerging need to provide a strong level of assurance against these threat actors, but the process by which this assurance can be tested and evaluated is not so clear. This paper outlines an initial framework, developed through research, for evaluating the cyber-worthiness of complex mission-critical systems using threat models developed in SysML. The framework provides a visual model of the process by which a threat actor could attack the system. It builds on existing concepts from system safety engineering and expands on how to present the risks and mitigations in an understandable manner.
2020-09-28
2019.  2019 4th International Conference on System Reliability and Safety (ICSRS). :1–9.
The Smart Grid is the leading example when talking about complex and critical Systems-of-Systems (SoS). Specifically regarding the Smart Grid's criticality, dependability is a central quality attribute to strive for. Combined with the desire for agility in modern development, conventional systems engineering methods reach their limits in coping with these requirements. However, approaches from model-based or model-driven engineering can reduce complexity and support development with rapidly changing requirements. Model-Driven Engineering (MDE) is known to be more successful when applied in a domain-specific manner. For that reason, an approach for Domain Specific Systems Engineering (DSSE) in the Smart Grid has already been specially investigated. This Model-Driven Architecture (MDA) approach particularly aims at the comprehensibility of complex systems. In this context, the traceability of requirements is a centrally pursued attribute. However, achieving continuous traceability between the model of a system and the concrete implementation is still an open issue. To close this gap, the present research paper introduces a Model-Centric Software Development (MCSD) solution for Smart Grid applications. Based on two exploratory case studies, the focus finally lies on the automated generation of partial implementation artifacts and the evaluation of traceability, based on dedicated functional aspects.
2020-07-24
2019.  2019 18th European Control Conference (ECC). :2789–2795.
Many complex dynamical systems consist of a large number of interacting subsystems that operate harmoniously and make decisions that are designed for the benefit of the entire enterprise. If, in an attempt to disrupt the operation of the entire system, one subsystem gets attacked and is made to operate in a manner that is adversarial with the others, then the entire system suffers, resulting in an adversarial decision-making environment among its subsystems. Such an environment may affect not only the decision-making process of the attacked subsystem but also possibly the other remaining subsystems as well. The disruption caused by the attacked subsystem may cause the remaining subsystems to either coalesce as a unified team making team-based decisions, or disintegrate and act as independent decision-making entities. The decision-making process in these types of complex systems of systems is best analyzed within the general framework of cooperative and non-cooperative game theory. In this paper, we will develop an analysis that provides a theoretical basis for modeling the decision-making process in such complex systems. We show how cooperation among the subsystems can produce Noninferior Nash Strategies (NNS) that are fair and acceptable to all subsystems within the team while at the same time provide the subsystems in the team with the security of the Nash equilibrium against the opposing attacked subsystem. We contrast these strategies with the all Nash Strategies (NS) that would result if the operation of the entire system disintegrated and became adversarial among all subsystems as a result of the attack. An example of a system consisting of three subsystems with one opposing subsystem as a result of an attack is included to illustrate the results.
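For intuition about the Nash security property discussed in this abstract, here is a minimal sketch (with made-up payoffs) that enumerates pure-strategy Nash equilibria of a two-subsystem bimatrix game; the paper's Noninferior Nash Strategies are derived analytically and are not reproduced here:

```python
from itertools import product

# Illustrative only: enumerate the pure-strategy Nash equilibria of a
# small bimatrix game. At such a point, neither subsystem can gain by
# unilaterally deviating, which is the "security" a Nash strategy gives
# a player facing an adversarial opponent.

def pure_nash(payoff_a, payoff_b):
    """Return all (i, j) where neither player gains by unilateral deviation."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    eqs = []
    for i, j in product(range(rows), range(cols)):
        a_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
        b_best = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
        if a_best and b_best:
            eqs.append((i, j))
    return eqs

# Row player: cooperating team of subsystems; column player: attacked subsystem.
A = [[3, 1],
     [2, 2]]
B = [[3, 2],
     [1, 2]]
print(pure_nash(A, B))  # → [(0, 0), (1, 1)]
```

The contrast the paper draws is between strategies chosen jointly by a cooperating team (noninferior, i.e. Pareto-efficient within the team) and the all-adversarial equilibria that result when every subsystem plays independently.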
2020-07-06
2019.  2019 16th International ISC (Iranian Society of Cryptology) Conference on Information Security and Cryptology (ISCISC). :92–98.
IoT-based services are widely increasing due to their advantages such as economy, automation, and comfort. Smart cities are among the major applications of IoT-based systems. However, security and privacy threats are vital issues challenging the utilization of such services. The connected nature, variety of data technology, and volume of data maintained through these systems make their security analysis a difficult process. Threat modeling is one of the best practices for security analysis, especially for complex systems. This paper proposes a threat extraction method for IoT-based systems. We elaborate on a smart city scenario with three services: lighting, car parking, and waste management. Investigating these services, we first identify thirty-two distinct threat types. Second, we distinguish threat root causes by associating a threat with the constituent parts of the IoT-based system. In this way, threat instances can be extracted using the proposed derivation rules. Finally, we evaluate our method on a smart car parking scenario as well as on an E-Health system and identify more than 50 threat instances in each case, showing that the method can be easily generalized to other IoT-based systems whose constituent parts are known.
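The derivation-rule idea can be sketched as follows; the part taxonomy, rule table, and parking scenario below are hypothetical stand-ins, not the paper's actual thirty-two threat types or rules:

```python
# Hypothetical sketch of rule-based threat derivation: generic threat
# types are associated with constituent-part categories of an IoT
# system, and threat instances are produced by matching the rules
# against the concrete parts of a scenario.

THREAT_RULES = {
    "sensor":  ["spoofed readings", "physical tampering"],
    "gateway": ["firmware compromise", "traffic interception"],
    "cloud":   ["data breach", "denial of service"],
}

def derive_threats(parts):
    """Instantiate threats by applying category rules to concrete parts."""
    instances = []
    for part in parts:
        for threat in THREAT_RULES.get(part["category"], []):
            instances.append("%s on %s" % (threat, part["name"]))
    return instances

smart_parking = [
    {"name": "occupancy sensor", "category": "sensor"},
    {"name": "street gateway", "category": "gateway"},
    {"name": "booking backend", "category": "cloud"},
]
for t in derive_threats(smart_parking):
    print(t)
```

The generalization claim in the abstract corresponds to the fact that only the parts list changes between scenarios; the rule table is reused.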
2020-04-13
2019.  2019 IEEE International Conference on Smart Computing (SMARTCOMP). :104–109.
Policy-based security management is an accepted practice in the industry, required to simplify the administrative overhead associated with security management in complex systems. However, the growing dynamicity, complexity and scale of modern systems make it difficult to write security policies manually. Using AI, we can generate policies automatically. Automatically generated security policies can reduce the manual burden of defining policies, but their impact on the overall security of a system is unclear. In this paper, we discuss the security metrics that can be associated with a system using generative policies, and provide a simple model to determine the conditions under which generating security policies will be beneficial for improving the security of the system. We also show that for some types of security metrics, a system using generative policies can be considered equivalent to a system using manually defined policies, and the security metrics of the generative-policy-based system can be mapped to the security metrics of the manual system and vice versa.
2020-01-20
2019.  2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS). :400–407.

Bluetooth Low Energy (BLE) is a fast-growing protocol which has gained wide acceptance in recent years. Key features behind this growth are its high data rate and its ultra-low energy consumption, making it the perfect candidate for piconets. However, its lack of expandability without serious impact on its energy consumption profile prevents its adoption in more complex systems which depend on long network lifetime. Thus, a lot of academic research has focused on solving the BLE expandability problem, and BLE mesh has been introduced in the latest Bluetooth version. In our view, most of the related work cannot be efficiently implemented in networks which are mostly composed of resource-constrained nodes. We therefore propose a new energy-efficient tree algorithm for static, resource-constrained BLE networks, which achieves a longer network lifetime by both reducing the number of needed connection events as much as possible and balancing the energy dissipation in the network.
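One simple way to picture the tree idea (this is an illustrative sketch under assumed constraints, not the paper's algorithm) is to build a shallow spanning tree over the static topology while capping each relay's fan-out, so no single node drains its battery servicing too many connection events:

```python
from collections import deque

# Illustrative BFS spanning tree with a per-node child cap. Shallow
# trees keep hop counts (and thus relayed connection events) low, and
# the cap spreads relaying load across nodes.

def build_tree(adjacency, root, max_children=2):
    """Return a child-list map for a capped-fan-out BFS spanning tree."""
    parent = {root: None}
    children = {n: [] for n in adjacency}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neigh in adjacency[node]:
            if neigh not in parent and len(children[node]) < max_children:
                parent[neigh] = node
                children[node].append(neigh)
                queue.append(neigh)
    return children

# A small static BLE topology as an undirected adjacency map.
topology = {
    "hub": ["a", "b", "c"],
    "a": ["hub", "c", "d"],
    "b": ["hub", "d"],
    "c": ["hub", "a"],
    "d": ["a", "b"],
}
print(build_tree(topology, "hub"))
```

A real energy-balancing algorithm would weight the choice of parent by residual battery and link cost rather than simple BFS order.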

2019-11-12
Padon, Oded.  2018.  2018 Formal Methods in Computer Aided Design (FMCAD). :1–1.

Formal verification of infinite-state systems, and distributed systems in particular, is a long standing research goal. In the deductive verification approach, the programmer provides inductive invariants and pre/post specifications of procedures, reducing the verification problem to checking validity of logical verification conditions. This check is often performed by automated theorem provers and SMT solvers, substantially increasing productivity in the verification of complex systems. However, the unpredictability of automated provers presents a major hurdle to usability of these tools. This problem is particularly acute in case of provers that handle undecidable logics, for example, first-order logic with quantifiers and theories such as arithmetic. The resulting extreme sensitivity to minor changes has a strong negative impact on the convergence of the overall proof effort.
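The inductive-invariant check at the heart of the deductive approach can be illustrated on a finite toy system, where brute-force enumeration stands in for the SMT solver (real tools discharge these conditions symbolically over infinite state spaces):

```python
from itertools import product

# An invariant is inductive if it holds in every initial state and is
# preserved by every transition. Toy system: state (x, y), initial
# state (4, 0), transition moves one unit from x to y, so x + y stays 4.

STATES = list(product(range(5), range(5)))

def init(s):
    return s == (4, 0)

def step(s, t):
    """True iff t is reachable from s in one transition."""
    (x, y), (x2, y2) = s, t
    return x > 0 and x2 == x - 1 and y2 == y + 1

def invariant(s):
    return s[0] + s[1] == 4

def is_inductive(inv):
    """Check the two verification conditions by enumeration."""
    base = all(inv(s) for s in STATES if init(s))
    preserved = all(inv(t) for s in STATES for t in STATES
                    if inv(s) and step(s, t))
    return base and preserved

print(is_inductive(invariant))  # the sum is preserved, so this prints True
```

The unpredictability the abstract describes arises exactly where this enumeration is replaced by an automated prover over quantified first-order formulas: small changes to the invariant can flip the check between fast success and divergence.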

2019-10-23
2018.  2018 11th International Conference on IT Security Incident Management & IT Forensics (IMF). :115–133.

To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric, which measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling to proceed, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric into the continuous domain and by adding the possibility to incorporate information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC which includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC, using data from a modern and productively used vulnerability database of a national CERT.
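A minimal sketch of the β-distributed attacker-skill idea follows; the mapping from skill to compromise time below is invented purely for illustration, since the paper derives its model from CVSS vectors and TTC's underlying process, which we do not reproduce here:

```python
import random

# Hedged sketch: attacker skill is drawn from a Beta(alpha, beta)
# distribution on (0, 1), and expected compromise time shrinks as skill
# grows. The linear skill-to-time mapping is an assumption of this
# sketch, not the paper's formula.

def estimate_ttc(alpha, beta, base_days=30.0, samples=100000, seed=7):
    """Monte Carlo estimate of expected time-to-compromise in days."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        skill = rng.betavariate(alpha, beta)   # attacker skill in (0, 1)
        total += base_days * (1.0 - skill)     # more skill, less time
    return total / samples

# Skilled attacker population (mass near 1) vs. an unskilled one.
print(round(estimate_ttc(5, 2), 2))   # roughly 30 * (1 - 5/7)
print(round(estimate_ttc(2, 5), 2))
```

The advantage of a continuous skill distribution over the original TTC's discrete skill levels is visible here: the two populations produce smoothly different expectations rather than a handful of fixed cases.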

2019-09-26
2018.  2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :1460–1466.

Code churn has been successfully used to identify defect-inducing changes in software development. Our recent analysis of cross-release code churn showed that several design metrics exhibit moderate correlation with the number of defects in complex systems. The goal of this paper is to explore whether cross-release code churn can be used to identify critical design changes and contribute to the prediction of defects for software in evolution. In our case study, we used two types of data from consecutive releases of open-source projects, with and without cross-release code churn, to build standard prediction models. The prediction models were trained on earlier releases and tested on the following ones, evaluating performance in terms of AUC, GM and the effort-aware measure Pop. The comparison of their performance was used to answer our research question. The obtained results showed that the prediction model performs better when cross-release code churn is included. A practical implication of this research is to use cross-release code churn to aid in the safe planning of the next release in software development.
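The AUC evaluation step can be sketched independently of the full prediction models; the per-module churn values and defect labels below are invented for illustration:

```python
# AUC as used in defect prediction: the probability that a defective
# module is ranked above a clean one by the model's score. Here the
# "score" is simply raw cross-release churn rather than a trained model.

def auc(scores, labels):
    """Mann-Whitney formulation of AUC; labels are 1 (defective) or 0."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

churn = [120, 5, 300, 40, 8, 220]    # cross-release code churn per module
defect = [1, 0, 1, 0, 1, 0]          # 1 = defect found in the next release
print(round(auc(churn, defect), 3))  # → 0.667
```

In the paper's setup, two such AUC values, one from a model with the churn feature and one without, would be compared on releases held out in time.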

2018-09-28
2017.  Proceedings of the 2017 International Conference of The Computational Social Science Society of the Americas. :17:1–17:1.
In a world of ever-increasing systems interdependence, effective cybersecurity policy design seems to be one of the most critically understudied elements of our national security strategy. Enterprise cyber technologies are often implemented without much regard to the interactions that occur between humans and the new technology. Furthermore, the interactions that occur between individuals can often have an impact on the newly employed technology as well. Without a rigorous, evidence-based approach to ground an employment strategy and elucidate the emergent organizational needs that will come with the fielding of new cyber capabilities, one is left to speculate on the impact that novel technologies will have on the aggregate functioning of the enterprise. In this paper, we will explore a scenario in which a hypothetical government agency applies a complexity science perspective, supported by agent-based modeling, to more fully understand the impacts of strategic policy decisions. We present a model to explore the socio-technical dynamics of these systems, discuss lessons using this platform, and suggest further research and development.
2018-02-15
2017.  2017 IEEE International Conference on Data Mining (ICDM). :1003–1008.

Complex systems are prevalent in many fields such as finance, security and industry. A fundamental problem in system management is to perform diagnosis in case of system failure such that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, the invariant network has proven to be a powerful tool for characterizing complex system behaviors. In an invariant network, a node represents a system component, and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single, global fault propagation in the entire network. However, in real-world large-scale complex systems, it is more common for multiple fault propagations to grow simultaneously and locally within different node clusters and jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, probabilistic clustering is performed to uncover impaired node clusters in the invariant network. Then, in the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in the different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.
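A much-simplified, single-cluster sketch of the diffusion phase follows (the paper's method is two-phase, with probabilistic clustering and a low-rank model, neither of which is reproduced here); the graph and seed scores are illustrative:

```python
# Random-walk-with-restart style diffusion over an invariant network:
# endpoints of violated invariants are seeded with anomaly mass, the
# mass spreads along stable-interaction edges, and nodes are ranked as
# causal-anomaly candidates by their converged score.

def diffuse(adjacency, seeds, restart=0.5, iters=50):
    """Return nodes ranked by diffused anomaly score, highest first."""
    nodes = list(adjacency)
    score = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {}
        for n in nodes:
            spread = sum(score[m] / len(adjacency[m]) for m in adjacency[n])
            nxt[n] = restart * seeds.get(n, 0.0) + (1 - restart) * spread
        score = nxt
    return sorted(score, key=score.get, reverse=True)

# Toy invariant network of four system components.
invariant_net = {
    "db": ["app", "cache"],
    "app": ["db", "web"],
    "cache": ["db"],
    "web": ["app"],
}
broken = {"app": 1.0, "web": 0.6}   # endpoints of violated invariants
print(diffuse(invariant_net, broken))
```

Running the diffusion separately within each impaired cluster, as the paper proposes, is what lets multiple local fault propagations be ranked independently instead of being averaged into one global pattern.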

2015-05-06
Hardy, T.L.  2014.  Reliability and Maintainability Symposium (RAMS), 2014 Annual. :1–6.

Decreasing the potential for catastrophic consequences poses a significant challenge for high-risk industries. Organizations are under many different pressures, and they are continuously trying to adapt to changing conditions and recover from disturbances and stresses that can arise from both normal operations and unexpected events. Reducing risks in complex systems therefore requires that organizations develop and enhance traits that increase resilience. Resilience provides a holistic approach to safety, emphasizing the creation of organizations and systems that are proactive, interactive, reactive, and adaptive. This approach relies on disciplines such as system safety and emergency management, but also requires that organizations develop indicators and ways of knowing when an emergency is imminent. A resilient organization must be adaptive, using hands-on activities and lessons learned efforts to better prepare it to respond to future disruptions. It is evident from the discussions of each of the traits of resilience, including their limitations, that there are no easy answers to reducing safety risks in complex systems. However, efforts to strengthen resilience may help organizations better address the challenges associated with the ever-increasing complexities of their systems.