Biblio

Found 134 results

Filters: Keyword is software engineering
2022-08-12
Choi, Heeyoung, Young, Kang Ju.  2021.  Practical Approach of Security Enhancement Method based on the Protection Motivation Theory. 2021 21st ACIS International Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Winter). :96—97.
Strengthening information security requires practical solutions that reduce information security stress, because security measures work properly only when the organization members who apply them are motivated. This study therefore attempts to suggest the key factors that can enhance security while reducing the information security stress of organization members. To this end, based on protection motivation theory, trust in information security policies and security stress are set as mediating factors to explain changes in security reinforcement behavior, while the perceived risk of cyberattacks, efficacy, and response costs are considered as antecedents. Our study suggests a solution to the security reinforcement problem by analyzing the factors that influence the behavior of organization members and that can raise their protection motivation.
2022-08-03
Nakano, Yuto, Nakamura, Toru, Kobayashi, Yasuaki, Ozu, Takashi, Ishizaka, Masahito, Hashimoto, Masayuki, Yokoyama, Hiroyuki, Miyake, Yutaka, Kiyomoto, Shinsaku.  2021.  Automatic Security Inspection Framework for Trustworthy Supply Chain. 2021 IEEE/ACIS 19th International Conference on Software Engineering Research, Management and Applications (SERA). :45—50.
Threats and risks against supply chains are increasing, and a framework for assuring the trustworthiness of supply chains has been considered. In this framework, organisations in the supply chain validate their conformance to pre-defined requirements, and the results of these validations are linked to each other to establish the trustworthiness of the entire supply chain. In this paper, we further develop this framework for data supply chains. First, we implement the framework and evaluate its performance; the evaluation shows that 500 digital evidences (logs) can be checked in 0.28 seconds. We also propose five methods to improve performance as well as five new functionalities to improve usability. With these functionalities, the framework also supports maintaining the certificate chain.
2022-07-28
[Anonymous].  2021.  An Automated Pipeline for Privacy Leak Analysis of Android Applications. 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :1048—1050.
We propose an automated pipeline for analyzing privacy leaks in Android applications. By using a combination of dynamic and static analysis, we cross-validate the results of each to improve accuracy. Compared to state-of-the-art approaches, we not only capture network traffic for analysis but also examine the data flows inside the application, focusing in particular on privacy leakage caused by third-party services and high-risk permissions. The proposed automated approach combines taint analysis, permission analysis, network traffic analysis, and dynamic function tracing at run time to identify private information leaks. We further implement an automatic validation and complementation process to reduce false positives. A small-scale experiment has been conducted on 30 Android applications, and a large-scale experiment on more than 10,000 Android applications is in progress.
Ami, Amit Seal, Kafle, Kaushal, Nadkarni, Adwait, Poshyvanyk, Denys, Moran, Kevin.  2021.  µSE: Mutation-Based Evaluation of Security-Focused Static Analysis Tools for Android. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :53—56.
This demo paper presents the technical details and usage scenarios of μSE, a mutation-based tool for evaluating security-focused static analysis tools for Android. Mutation testing is generally used by software practitioners to assess the robustness of a given test suite; we leverage this technique instead to systematically evaluate static analysis tools and to uncover and document soundness issues. μSE's analysis has found 25 previously undocumented flaws in static data leak detection tools for Android. μSE offers four mutation schemes, namely Reachability, Complex-reachability, TaintSink, and ScopeSink, which determine the locations of seeded mutants. Furthermore, the user can extend μSE by customizing the API calls targeted by the mutation analysis. μSE is also practical, as it uses filtering techniques based on compilation and execution criteria that reduce the number of ineffective mutations.
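The core idea of mutation-based evaluation can be illustrated with a small sketch: seed a canonical data-leak statement at candidate locations in source code, then check which seeded mutants a static analyzer fails to flag. The leak statement, the regex-based seeding, and the toy "analyzer" below are all illustrative assumptions, not μSE's actual implementation.

```python
import re

# Canonical source-to-sink leak used as the seeded mutant (illustrative).
LEAK = 'Log.d("leak", getDeviceId());'

def seed_mutants(java_source: str) -> list:
    """Return one mutated copy of the source per method-body opening brace."""
    mutants = []
    for m in re.finditer(r"\)\s*\{", java_source):
        pos = m.end()
        mutants.append(java_source[:pos] + " " + LEAK + java_source[pos:])
    return mutants

def undetected(mutants, analyzer) -> list:
    """Indices of mutants the analyzer failed to flag (candidate soundness flaws)."""
    return [i for i, src in enumerate(mutants) if not analyzer(src)]

src = "class A { void f() { g(); } void g() { } }"
muts = seed_mutants(src)          # one mutant per method: leak in f(), leak in g()
# A toy 'analyzer' that only inspects f() misses the mutant seeded into g():
missed = undetected(muts, lambda s: 'f() { Log' in s)
```

Each unflagged mutant points to a location where a real tool's reachability or taint model may be unsound, which is exactly what the mutation schemes above are designed to probe.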
2022-07-15
Nguyen, Phuong T., Di Sipio, Claudio, Di Rocco, Juri, Di Penta, Massimiliano, Di Ruscio, Davide.  2021.  Adversarial Attacks to API Recommender Systems: Time to Wake Up and Smell the Coffee? 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :253—265.
Recommender systems in software engineering provide developers with a wide range of valuable items to help them complete their tasks. Among others, API recommender systems have gained momentum in recent years as they have become more successful at suggesting API calls or code snippets. While these systems have proven effective in terms of prediction accuracy, less attention has been paid to such recommenders' resilience against adversarial attempts. In fact, by tampering with the recommenders' learning material, e.g., data from large open-source software (OSS) repositories, hostile users may succeed in injecting malicious data, putting at risk the software clients adopting API recommender systems. In this paper, we present an empirical investigation of adversarial machine learning techniques and their possible influence on recommender systems. The evaluation performed on three state-of-the-art API recommender systems reveals a worrying outcome: none of them is immune to malicious data. This result triggers the need for effective countermeasures to protect recommender systems against hostile attacks disguised in training data.
2022-06-10
Bures, Tomas, Gerostathopoulos, Ilias, Hnětynka, Petr, Seifermann, Stephan, Walter, Maximilian, Heinrich, Robert.  2021.  Aspect-Oriented Adaptation of Access Control Rules. 2021 47th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). :363–370.
Cyber-physical systems (CPS) and IoT systems are nowadays commonly designed to be self-adaptive, endowing them with the ability to dynamically reconfigure in response to their changing environment. This adaptation also concerns security, one of the most important properties of these systems. While the state of the art on security-related adaptivity in these systems can often deal well with fully anticipated situations in the environment, situations that are only partially anticipated, or not anticipated at all, remain a challenge. Such uncertainty is, however, omnipresent in these systems due to humans in the loop, open-endedness, and an only partial understanding of the processes happening in the environment. In this paper, we partially address this challenge with an approach for tackling access control in the face of partially unanticipated situations. We base our solution on a special kind of aspect that builds on an existing access control system and creates a second level of adaptation, which addresses partially unanticipated situations by modifying access control rules. The approach builds on our previous work, in which we analyzed and classified uncertainty in security and trust in such systems and outlined the idea of access-control-related situational patterns. The aspects presented in this paper serve as a means for application-specific specialization of these situational patterns. We showcase our approach on a simplified but real-life example in the domain of Industry 4.0 that comes from one of our industrial projects.
2022-04-25
Li, Yuezun, Zhang, Cong, Sun, Pu, Ke, Lipeng, Ju, Yan, Qi, Honggang, Lyu, Siwei.  2021.  DeepFake-o-meter: An Open Platform for DeepFake Detection. 2021 IEEE Security and Privacy Workshops (SPW). :277–281.
In recent years, the advent of deep learning-based techniques and the significant reduction in the cost of computation have made it feasible to create realistic videos of human faces, commonly known as DeepFakes. The availability of open-source tools to create DeepFakes poses a threat to the trustworthiness of online media. In this work, we develop an open-source online platform, known as DeepFake-o-meter, that integrates state-of-the-art DeepFake detection methods and provides a convenient interface for users. We describe the design and function of DeepFake-o-meter in this work.
2022-04-19
Wang, Pei, Bangert, Julian, Kern, Christoph.  2021.  If It’s Not Secure, It Should Not Compile: Preventing DOM-Based XSS in Large-Scale Web Development with API Hardening. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). :1360–1372.
Despite enormous mitigation efforts, cross-site scripting (XSS) remains one of the most prevalent security threats on the internet. Decades of exploitation and remediation have demonstrated that code inspection and testing alone do not eliminate XSS vulnerabilities in complex web applications with a high degree of confidence. This paper introduces Google's secure-by-design engineering paradigm that effectively prevents DOM-based XSS vulnerabilities in large-scale web development. Our approach, named API hardening, enforces a series of company-wide secure coding practices. We provide a set of secure APIs to replace native DOM APIs that are prone to XSS vulnerabilities. Through a combination of type contracts and appropriate validation and escaping, the secure APIs ensure that applications built on them are free of XSS vulnerabilities. We deploy a simple yet capable compile-time checker to guarantee that developers exclusively use our hardened APIs to interact with the DOM. We have made various efforts to scale this approach to tens of thousands of engineers without significant productivity impact. By offering rigorous tooling and consultant support, we help developers adopt the secure coding practices as seamlessly as possible. We present empirical results showing how API hardening has helped reduce the occurrence of XSS vulnerabilities in Google's enormous code base over the course of a two-year deployment.
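The type-contract idea behind API hardening can be sketched in a few lines: a dedicated value type that can only be produced through an escaping builder, and a DOM-like sink that accepts only that type. The class and function names below are illustrative assumptions, not Google's actual API, and Python's runtime check stands in for the compile-time checker the paper describes.

```python
import html

_BUILDER_TOKEN = object()  # prevents direct construction outside vetted builders

class SafeHtml:
    """Type contract: contents are safe to interpolate as HTML."""
    def __init__(self, token, value):
        assert token is _BUILDER_TOKEN, "construct via html_escape() only"
        self.value = value

def html_escape(untrusted: str) -> SafeHtml:
    """Vetted builder: escaping is the only path from raw strings to SafeHtml."""
    return SafeHtml(_BUILDER_TOKEN, html.escape(untrusted))

def set_inner_html(element: dict, content: SafeHtml) -> None:
    """Hardened sink: rejects anything that is not SafeHtml."""
    if not isinstance(content, SafeHtml):  # stand-in for the compile-time check
        raise TypeError("sink requires SafeHtml")
    element["innerHTML"] = content.value

el = {}
set_inner_html(el, html_escape('<script>alert(1)</script>'))
# el["innerHTML"] now holds the escaped markup; raw strings are rejected.
```

Because untrusted strings can only reach the sink after passing through the builder, XSS freedom becomes a property of the type system rather than of each call site.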
2022-04-18
Paul, Rajshakhar, Turzo, Asif Kamal, Bosu, Amiangshu.  2021.  Why Security Defects Go Unnoticed During Code Reviews? A Case-Control Study of the Chromium OS Project. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). :1373–1385.
Peer code review has been found to be effective in identifying security vulnerabilities. However, despite practicing mandatory code reviews, many Open Source Software (OSS) projects still encounter a large number of post-release security vulnerabilities, as some security defects escape review. A project manager may therefore wonder whether there was any weakness or inconsistency during a code review that missed a security vulnerability. Answers to this question may help a manager pinpoint areas of concern and take measures to improve the effectiveness of his/her project's code reviews in identifying security defects. This study therefore aims to identify the factors that differentiate code reviews that successfully identified security defects from those that missed such defects. With this goal, we conduct a case-control study of the Chromium OS project. Using multi-stage semi-automated approaches, we build a dataset of 516 code reviews that successfully identified security defects and 374 code reviews where security defects escaped. The results of our empirical study suggest that there are significant differences between the categories of security defects that are identified and those that are missed during code reviews. A logistic regression model fitted on our dataset achieved an AUC score of 0.91 and identified nine code review attributes that influence the identification of security defects. While the time to complete a review, the number of mutual reviews between two developers, and whether the review is for a bug fix have positive impacts on vulnerability identification, opposite effects are observed for the number of directories under review, the number of total reviews by a developer, and the total number of prior commits for the file under review.
2022-03-22
Xu, Ben, Liu, Jun.  2021.  False Data Detection Based On LSTM Network In Smart Grid. 2021 4th International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE). :314—317.
In contrast to traditional grids, smart grids can help utilities save energy, thereby reducing operating costs. In the smart grid, the quality of monitoring and control can be greatly improved by combining computing and intelligent communication. However, this exposes the system to false data injection (FDI) attacks, making it vulnerable to intrusion. It is therefore very important to detect such false data injection attacks and to provide an algorithm that protects the system from them. In this paper, an FDI detection method based on an LSTM network is proposed and validated by simulation on the IEEE 14-bus platform.
2022-03-15
Keshani, Mehdi.  2021.  Scalable Call Graph Constructor for Maven. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :99—101.
As a rich source of data, Call Graphs are used for various applications, including security vulnerability detection. Despite multiple studies showing that Call Graphs can drastically improve the accuracy of analysis, existing ecosystem-scale tools like Dependabot do not use Call Graphs and work at the package level. Using Call Graphs in ecosystem use cases is impractical because of the scalability problems of Call Graph generators: Call Graph generation is usually considered a "full program analysis", resulting in large Call Graphs and expensive computation. This approach does not work at ecosystem scale, because the number of possible combinations in which a particular artifact can appear in a full program explodes; it is therefore necessary to make the analysis incremental. There are existing studies on different types of incremental program analysis; however, none of them focuses on Call Graph generation for an entire ecosystem. In this paper, we propose an incremental implementation of the CHA algorithm that can generate Call Graphs on demand by stitching together partial Call Graphs that have previously been extracted for libraries. Our preliminary evaluation results show that the proposed approach scales well and outperforms the most scalable existing framework, OPAL.
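The stitching idea can be sketched as follows: each library contributes a partial call graph (edges among its own methods plus calls to external symbols), and an on-demand graph is obtained by unioning the partial graphs and resolving external calls against the set of known definitions, instead of re-analyzing the whole program. The data layout here is a hypothetical simplification, not the paper's actual format.

```python
def stitch(partials):
    """Stitch partial call graphs into one graph.

    partials: list of dicts with keys
      'defines' -- set of method names the library defines
      'edges'   -- list of (caller, callee) pairs, callee possibly external
    """
    defined = set()
    for p in partials:
        defined |= p["defines"]
    graph = {}
    for p in partials:
        for caller, callee in p["edges"]:
            if callee in defined:  # resolved across partial graphs
                graph.setdefault(caller, set()).add(callee)
    return graph

# Two toy "libraries": A calls into B; B calls a symbol no library defines.
lib_a = {"defines": {"A.f", "A.g"}, "edges": [("A.f", "A.g"), ("A.f", "B.h")]}
lib_b = {"defines": {"B.h"}, "edges": [("B.h", "C.missing")]}
cg = stitch([lib_a, lib_b])  # A.f resolves to both A.g and B.h
```

Because each partial graph is computed once per library release, stitching a new dependency set is cheap compared to a full-program analysis.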
Baluta, Teodora, Chua, Zheng Leong, Meel, Kuldeep S., Saxena, Prateek.  2021.  Scalable Quantitative Verification for Deep Neural Networks. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :248—249.
Despite the functional success of deep neural networks (DNNs), their trustworthiness remains a crucial open challenge. To address this challenge, both testing and verification techniques have been proposed, but these existing techniques provide either scalability to large networks or formal guarantees, not both. In this paper, we propose a scalable quantitative verification framework for deep neural networks, i.e., a test-driven approach that comes with formal guarantees that a desired probabilistic property is satisfied. Our technique performs enough tests until the soundness of a formal probabilistic property can be proven. It can be used to certify properties of both deterministic and randomized DNNs. We implement our approach in a tool called PROVERO and apply it in the context of certifying the adversarial robustness of DNNs. In this context, we first present a new attack-agnostic measure of robustness, which offers an alternative to the purely attack-based methodology of evaluating robustness reported today. Second, PROVERO provides certificates of robustness for large DNNs where existing state-of-the-art verification tools fail to produce conclusive results. Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees, even when testers only have black-box access to the neural network.
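The "test until provable" idea admits a simple back-of-the-envelope sketch: to estimate the probability that a probabilistic property holds to within eps, with confidence 1 - delta, a Hoeffding bound gives the number of i.i.d. black-box tests needed. This is a generic concentration-bound calculation, not PROVERO's actual (more sample-efficient) algorithm; the parameter values are illustrative.

```python
import math

def samples_needed(eps: float, delta: float) -> int:
    """Hoeffding bound: n >= ln(2/delta) / (2 * eps**2) i.i.d. samples ensure
    the empirical success rate is within eps of the true probability with
    confidence at least 1 - delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

# e.g., certify a robustness probability to within 0.05, with 99% confidence:
n = samples_needed(eps=0.05, delta=0.01)
```

The key point is that n depends only on eps and delta, not on the network's size, which is why sampling-based certification scales to DNNs that exact verifiers cannot handle.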
2022-02-07
Han, Sung-Hwa.  2021.  Analysis of Data Transforming Technology for Malware Detection. 2021 21st ACIS International Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Winter). :224–229.
As AI technology advances and its use increases, efforts to apply machine learning to malware detection are also increasing. Malware learning, however, requires a standardized data set: because malware is unstructured data, it cannot be learned directly. To solve this problem, many studies have attempted to convert unstructured data into structured data. In this study, we surveyed the unstructured-to-structured conversion methods proposed in prior studies and analyzed the features and limitations of each. We found that most data conversion techniques propose a conversion mechanism but do not determine the scope of the technique, and that the resulting data sets are not suitable for use as training data because their feature space is unbounded.
2022-02-04
Zhang, Mingyue.  2021.  System Component-Level Self-Adaptations for Security via Bayesian Games. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :102–104.

Security attacks present unique challenges to self-adaptive system design due to the adversarial nature of the environment. Modeling the system as a single player, as done in prior work in the security domain, is insufficient when the system is partially compromised and for the design of fine-grained defensive strategies in which the remaining autonomous parts of the system cooperate to mitigate the impact of attacks. To deal with these issues, we propose a new self-adaptive framework that incorporates Bayesian games and models the defender (i.e., the system) at the granularity of components in the system architecture. The system architecture model is translated into a Bayesian multi-player game in which each component is modeled as an independent player, while security attacks are encoded as variant types for the components. The defensive strategy for the system is dynamically computed by solving for the pure-strategy equilibrium, achieving the best possible system utility and improving the resiliency of the system against security attacks.

2022-01-31
Stevens, Clay, Soundy, Jared, Chan, Hau.  2021.  Exploring the Efficiency of Self-Organizing Software Teams with Game Theory. 2021 IEEE/ACM 43rd International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER). :36–40.
Over the last two decades, software development has moved away from centralized, plan-based management toward agile methodologies such as Scrum. Agile methodologies are founded on a shared set of core principles, including self-organizing software development teams. Such teams are promoted as a way to increase both developer productivity and team morale, which is echoed by academic research. However, recent work on agile methodologies neglects to consider strategic behavior among developers, particularly during task assignment, one of the primary functions of a self-organizing team. This paper argues that self-organizing software teams can be readily modeled using game theory, providing insight into how agile developers may act when behaving strategically. We support our argument by presenting a general model for the self-assignment of development tasks that is based on, and extends, concepts drawn from established game theory research. We further introduce the software engineering community to two metrics drawn from game theory, the price of stability and the price of anarchy, which can be used to gauge the efficiency of self-organizing teams compared to centralized management. We demonstrate how these metrics can be used in a case study evaluating the hypothesis that smaller teams self-organize more efficiently than larger teams, finding conditional support for that hypothesis. Our game-theoretic framework provides a new perspective for the software engineering community, opening many avenues for future research.
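The two metrics above are easy to illustrate on a toy self-assignment game: two developers each pick one of two tasks, the social cost of a profile is the total completion time, and we compare the costs of pure Nash equilibria against the optimum. The cost numbers are invented for illustration; this is not the paper's model.

```python
from itertools import product

# cost[dev][(own_choice, other_choice)] = that developer's completion time
COST = {
    0: {("t1", "t1"): 4, ("t1", "t2"): 1, ("t2", "t1"): 3, ("t2", "t2"): 6},
    1: {("t1", "t1"): 4, ("t1", "t2"): 3, ("t2", "t1"): 1, ("t2", "t2"): 6},
}
CHOICES = ["t1", "t2"]

def social_cost(profile):
    a, b = profile
    return COST[0][(a, b)] + COST[1][(b, a)]

def is_nash(profile):
    """A profile is a pure Nash equilibrium if no developer can lower
    their own cost by unilaterally switching tasks."""
    a, b = profile
    return (COST[0][(a, b)] <= min(COST[0][(x, b)] for x in CHOICES)
            and COST[1][(b, a)] <= min(COST[1][(y, a)] for y in CHOICES))

profiles = list(product(CHOICES, repeat=2))
opt = min(social_cost(p) for p in profiles)
eq_costs = [social_cost(p) for p in profiles if is_nash(p)]
poa = max(eq_costs) / opt   # price of anarchy: worst equilibrium vs. optimum
pos = min(eq_costs) / opt   # price of stability: best equilibrium vs. optimum
```

In this toy game one equilibrium matches the optimum (price of stability 1.0) while another is three times worse (price of anarchy 3.0), showing how self-organization can drift away from the centrally optimal assignment.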
Freire, Sávio, Rios, Nicolli, Pérez, Boris, Castellanos, Camilo, Correal, Darío, Ramač, Robert, Mandić, Vladimir, Taušan, Nebojša, López, Gustavo, Pacheco, Alexia et al..  2021.  How Experience Impacts Practitioners' Perception of Causes and Effects of Technical Debt. 2021 IEEE/ACM 13th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE). :21–30.
Context: The technical debt (TD) metaphor helps to conceptualize the pending issues and trade-offs made during software development. Knowing the causes of TD can support the definition of preventive actions, and information about its effects aids the prioritization of TD payment. Goal: To investigate the impact of experience level on how practitioners perceive the most likely causes of TD and the effects of TD that have the highest impact on software projects. Method: We approach this topic by surveying 227 practitioners. Results: While experienced software developers focus on human factors as TD causes and on external quality attributes as TD effects, less experienced developers seem to concentrate on technical issues as causes and on internal quality issues and increased project effort as effects. Missing any of these types of causes could lead a team to miss the identification of important TD, or to miss opportunities to preempt TD; missing important effects could hamper effective planning or erode the effectiveness of decisions about prioritizing TD items. Conclusion: Composing software development teams of practitioners with a homogeneous experience level can erode the team's ability to effectively manage TD.
Velez, Miguel, Jamshidi, Pooyan, Siegmund, Norbert, Apel, Sven, Kästner, Christian.  2021.  White-Box Analysis over Machine Learning: Modeling Performance of Configurable Systems. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). :1072–1084.

Performance-influence models can help stakeholders understand how and where configuration options and their interactions influence the performance of a system. With this understanding, stakeholders can debug performance behavior and make deliberate configuration decisions. Current black-box techniques to build such models combine various sampling and learning strategies, resulting in tradeoffs between measurement effort, accuracy, and interpretability. We present Comprex, a white-box approach to build performance-influence models for configurable systems, combining insights of local measurements, dynamic taint analysis to track options in the implementation, compositionality, and compression of the configuration space, without relying on machine learning to extrapolate incomplete samples. Our evaluation on 4 widely-used, open-source projects demonstrates that Comprex builds similarly accurate performance-influence models to the most accurate and expensive black-box approach, but at a reduced cost and with additional benefits from interpretable and local models.

2022-01-25
de Atocha Sosa Jiménez, Eduardo Joel, Aguilar Vera, Raúl A., López Martínez, José Luis, Díaz Mendoza, Julio C..  2021.  Methodological Proposal for the development of Computerized Educational Materials based on Augmented Reality. 2021 Mexican International Conference on Computer Science (ENC). :1—6.
This article describes research work in progress in which a methodology for the development of computerized educational materials based on augmented reality is proposed. The proposal is preceded by a systematic review of the literature, which concludes that a methodology assisting teachers and developers interested in creating educational materials with augmented reality technology would be valuable. The proposed methodology consists of four stages: (1) initiation, (2) design of the learning scenario, (3) implementation, and (4) evaluation, together with the specific elements that must be considered in each stage for its correct fulfillment. Finally, the article briefly describes the validation strategy designed to evaluate this methodological proposal.
2021-11-29
Hough, Katherine, Welearegai, Gebrehiwet, Hammer, Christian, Bell, Jonathan.  2020.  Revealing Injection Vulnerabilities by Leveraging Existing Tests. 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). :284–296.
Code injection attacks, like the one used in the high-profile 2017 Equifax breach, have become increasingly common, now ranking #1 on OWASP's list of critical web application vulnerabilities. Static analyses for detecting these vulnerabilities can overwhelm developers with false positive reports. Meanwhile, most dynamic analyses rely on detecting vulnerabilities as they occur in the field, which can introduce a high performance overhead in production code. This paper describes a new approach for detecting injection vulnerabilities in applications by harnessing the combined power of human developers' test suites and automated dynamic analysis. Our new approach, Rivulet, monitors the execution of developer-written functional tests in order to detect information flows that may be vulnerable to attack. Then, Rivulet uses a white-box test generation technique to repurpose those functional tests to check if any vulnerable flow could be exploited. When applied to the version of Apache Struts exploited in the 2017 Equifax attack, Rivulet quickly identifies the vulnerability, leveraging only the tests that existed in Struts at that time. We compared Rivulet to the state-of-the-art static vulnerability detector Julia on benchmarks, finding that Rivulet outperformed Julia in both false positives and false negatives. We also used Rivulet to detect new vulnerabilities.
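The dynamic-taint idea behind this style of analysis can be sketched in miniature: values derived from untrusted input carry a taint mark through string operations, and a security-sensitive sink reports any tainted, unsanitized data that reaches it during a test run. The classes and the reporting string below are illustrative assumptions, not Rivulet's actual instrumentation.

```python
class Tainted(str):
    """A string subclass marking data that came from an untrusted source."""

def taint(value: str) -> "Tainted":
    """Mark data entering from an untrusted boundary (e.g., an HTTP request)."""
    return Tainted(value)

def concat(a: str, b: str) -> str:
    """String concatenation that propagates the taint mark."""
    out = a + b
    return Tainted(out) if isinstance(a, Tainted) or isinstance(b, Tainted) else out

def sql_sink(query: str) -> str:
    """Security-sensitive sink: a functional test reaching here with tainted
    data flags a candidate injection flow."""
    if isinstance(query, Tainted):
        return "VULNERABLE FLOW REPORTED"
    return "ok"

user_input = taint("'; DROP TABLE users; --")
query = concat("SELECT * FROM users WHERE name = ", user_input)
report = sql_sink(query)   # tainted data reached the sink
```

Running ordinary functional tests through such instrumentation surfaces source-to-sink flows without the false-positive volume of purely static analysis.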
Fu, Xiaoqin, Cai, Haipeng.  2020.  Scaling Application-Level Dynamic Taint Analysis to Enterprise-Scale Distributed Systems. 2020 IEEE/ACM 42nd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :270–271.
With the increasing deployment of enterprise-scale distributed systems, effective and practical defenses for such systems against various security vulnerabilities such as sensitive data leaks are urgently needed. However, most existing solutions are limited to centralized programs. For real-world distributed systems which are of large scales, current solutions commonly face one or more of scalability, applicability, and portability challenges. To overcome these challenges, we develop a novel dynamic taint analysis for enterprise-scale distributed systems. To achieve scalability, we use a multi-phase analysis strategy to reduce the overall cost. We infer implicit dependencies via partial-ordering method events in distributed programs to address the applicability challenge. To achieve greater portability, the analysis is designed to work at an application level without customizing platforms. Empirical results have shown promising scalability and capabilities of our approach.
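The applicability idea above, inferring implicit dependencies by partially ordering method events, can be sketched with the classic happens-before relation: events are ordered within one process by program order, a send precedes its matching receive, and the transitive closure yields cross-process dependencies without any platform customization. The event format is a hypothetical simplification of what such an analysis would record.

```python
def happens_before(events):
    """Infer a happens-before partial order over distributed events.

    events: list of (process, kind, msg_id) tuples in per-process order,
            where kind is 'send' or 'recv'. Returns a set of (i, j) pairs
            meaning event i happens before event j.
    """
    hb = set()
    last = {}    # last event index seen per process (program order)
    sends = {}   # msg_id -> index of the send event (message order)
    for i, (proc, kind, msg) in enumerate(events):
        if proc in last:
            hb.add((last[proc], i))       # program order within a process
        last[proc] = i
        if kind == "send":
            sends[msg] = i
        elif kind == "recv" and msg in sends:
            hb.add((sends[msg], i))       # send happens before its receive
    changed = True                        # transitive closure (tiny inputs only)
    while changed:
        changed = False
        for (a, b) in list(hb):
            for (c, d) in list(hb):
                if b == c and (a, d) not in hb:
                    hb.add((a, d))
                    changed = True
    return hb

ev = [("p1", "send", "m1"), ("p2", "recv", "m1"),
      ("p2", "send", "m2"), ("p3", "recv", "m2")]
hb = happens_before(ev)   # p1's send transitively precedes p3's receive
```

A taint analysis can then propagate labels along these inferred edges, which is how implicit cross-process flows become visible without instrumenting the distributed platform itself.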
2021-11-08
Brown, Brandon, Richardson, Alexicia, Smith, Marcellus, Dozier, Gerry, King, Michael C..  2020.  The Adversarial UFP/UFN Attack: A New Threat to ML-based Fake News Detection Systems? 2020 IEEE Symposium Series on Computational Intelligence (SSCI). :1523–1527.
In this paper, we propose two new attacks: the Adversarial Universal False Positive (UFP) Attack and the Adversarial Universal False Negative (UFN) Attack. The objective of this research is to introduce a new class of attack using only feature vector information. The results show the potential weaknesses of five machine learning (ML) classifiers: k-Nearest Neighbor (KNN), Naive Bayes (NB), Random Forest (RF), a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel, and XGBoost (XGB).
Liu, Qian, de Simone, Robert, Chen, Xiaohong, Kang, Jiexiang, Liu, Jing, Yin, Wei, Wang, Hui.  2020.  Multiform Logical Time & Space for Mobile Cyber-Physical System With Automated Driving Assistance System. 2020 27th Asia-Pacific Software Engineering Conference (APSEC). :415–424.
We study the use of multiform logical time, as embodied in Esterel/SyncCharts and the Clock Constraint Specification Language (CCSL), for the specification of assume-guarantee constraints providing safe driving rules related to time and space, in the context of Automated Driving Assistance Systems (ADAS). The main novelty lies in the use of logical clocks to represent the epochs of specific area encounters (for instance, when particular area trajectories just start overlapping), thereby combining time and space constraints in CCSL to build safe driving rule specifications. We propose high-level safe specification patterns that provide the expressiveness required for safe driving rules. In these patterns, multiform logical time provides the power of parameterization to express safe driving rules before instantiation in further simulation contexts. We also present an efficient way to update the constraints in the specification irregularly as the context changes, where elements (other cars, road sections, traffic signs) may dynamically enter and exit the scene: we add constraints for new elements and remove the constraints related to disappearing elements rather than rebuilding everything. A multi-lane highway scenario illustrates how the constraints in the specification are irregularly and efficiently updated as fresh scenes are received.
2021-10-12
Dong, Sichen, Jiao, Jian, Li, Shuyu.  2020.  A Multiple-Replica Provable Data Possession Algorithm Based on Branch Authentication Tree. 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). :400–404.
Sultana, Kazi Zakia, Codabux, Zadia, Williams, Byron.  2020.  Examining the Relationship of Code and Architectural Smells with Software Vulnerabilities. 2020 27th Asia-Pacific Software Engineering Conference (APSEC). :31–40.
Context: Security is vital to software developed for commercial or personal use. Although more organizations are realizing the importance of applying secure coding practices, in many of them security concerns are not known or addressed until a security failure occurs. The root cause of security failures is vulnerable code. While metrics have been used to predict software vulnerabilities, we explore the relationship of code and architectural smells with security weaknesses. As smells are surface indicators of deeper problems in software, determining the relationship between smells and software vulnerabilities can play a significant role in vulnerability prediction models. Objective: This study explores the relationship between smells and software vulnerabilities in order to identify the smells associated with vulnerable code. Method: We extracted the class-, method-, file-, and package-level smells for three systems: Apache Tomcat, Apache CXF, and Android. We then compared their occurrences in vulnerable classes (classes reported to contain vulnerable code) and in neutral classes (non-vulnerable classes for which no vulnerability had yet been reported). Results: We found that a vulnerable class is more likely to have certain smells than a non-vulnerable class: God Class, Complex Class, Large Class, Data Class, Feature Envy, and Brain Class have a statistically significant relationship with software vulnerabilities. We found no significant relationship between architectural smells and software vulnerabilities. Conclusion: We conclude that, for all the systems examined, there is a statistically significant correlation between software vulnerabilities and some code smells.
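The kind of association underlying this result can be quantified with a simple odds ratio over a 2x2 contingency table: how much more likely a vulnerable class is to carry a given smell than a neutral class. The counts below are invented for illustration and are not the paper's data.

```python
def odds_ratio(smelly_vuln, clean_vuln, smelly_neutral, clean_neutral):
    """Odds ratio for a smell occurring in vulnerable vs. neutral classes.
    A ratio well above 1.0 suggests the smell is associated with
    vulnerable code (significance would still need a statistical test)."""
    return (smelly_vuln / clean_vuln) / (smelly_neutral / clean_neutral)

# e.g., 40 of 100 vulnerable classes are God Classes vs. 10 of 100 neutral:
ratio = odds_ratio(40, 60, 10, 90)
```

Feeding such per-smell ratios (with appropriate significance testing) into a vulnerability prediction model is exactly the use case the study motivates.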