Biblio

Found 632 results

Filters: Keyword is Computational modeling
2023-02-02
Dang, Fangfang, Yan, Lijing, Li, Shuai, Li, Dingding.  2022.  Trusted Dynamic Threshold Calculation Method in Power IoT. 2022 14th International Conference on Communication Software and Networks (ICCSN). :19–22.
Smart grid is a new generation of grid that integrates the traditional grid with grid information systems, and information security is an extremely important part of the whole grid. Research on trusted computing technology provides new ideas for protecting the information security of the power grid. To address the problem of large deviations in the calculation of trusted dynamic thresholds, caused by characteristics such as self-similarity and traffic bursts in smart grid information collection, a traffic prediction model based on ARMA and the Poisson process is proposed. The Hurst coefficient is determined more accurately using R/S analysis, which ultimately improves the efficiency and accuracy of the trusted dynamic threshold calculation.
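The paper gives no implementation details; as an illustration of the R/S (rescaled-range) analysis it mentions, the following minimal sketch estimates a Hurst exponent from a traffic series. The function name, window-doubling scheme, and synthetic test series are assumptions, not taken from the paper.

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    window_sizes, rs_values = [], []
    w = min_window
    while w <= n // 2:
        rs_per_window = []
        for start in range(0, n - w + 1, w):
            chunk = series[start:start + w]
            dev = chunk - chunk.mean()
            cum_dev = np.cumsum(dev)
            r = cum_dev.max() - cum_dev.min()   # range of cumulative deviations
            s = chunk.std(ddof=0)               # standard deviation of the chunk
            if s > 0:
                rs_per_window.append(r / s)
        if rs_per_window:
            window_sizes.append(w)
            rs_values.append(np.mean(rs_per_window))
        w *= 2
    # The Hurst exponent is the slope of log(R/S) against log(window size).
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

# Example: a self-similar-looking traffic proxy (cumulative sum of noise -> H close to 1).
traffic = np.cumsum(np.random.normal(size=4096))
print(f"Estimated Hurst exponent: {hurst_rs(traffic):.2f}")
```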
Mariotti, Francesco, Tavanti, Matteo, Montecchi, Leonardo, Lollini, Paolo.  2022.  Extending a security ontology framework to model CAPEC attack paths and TAL adversary profiles. 2022 18th European Dependable Computing Conference (EDCC). :25–32.
Security evaluation can be performed using a variety of analysis methods, such as attack trees, attack graphs, threat propagation models, stochastic Petri nets, and so on. These methods analyze the effect of attacks on the system, and estimate security attributes from different perspectives. However, they require information from experts in the application domain for properly capturing the key elements of an attack scenario: i) the attack paths a system could be subject to, and ii) the different characteristics of the possible adversaries. For this reason, some recent works focused on the generation of low-level security models from a high-level description of the system, hiding the technical details from the modeler. In this paper we build on an existing ontology framework for security analysis, available in the ADVISE Meta tool, and we extend it in two directions: i) to cover the attack patterns available in the CAPEC database, a comprehensive dictionary of known patterns of attack, and ii) to capture all the adversaries’ profiles as defined in the Threat Agent Library (TAL), a reference library for defining the characteristics of external and internal threat agents ranging from industrial spies to untrained employees. The proposed extension supports a richer combination of adversaries’ profiles and attack paths, and provides guidance on how to further enrich the ontology based on taxonomies of attacks and adversaries.
Saarinen, Markku-Juhani O..  2022.  SP 800–22 and GM/T 0005–2012 Tests: Clearly Obsolete, Possibly Harmful. 2022 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :31–37.
When it comes to cryptographic random number generation, poor understanding of the security requirements and “mythical aura” of black-box statistical testing frequently leads it to be used as a substitute for cryptanalysis. To make things worse, a seemingly standard document, NIST SP 800–22, describes 15 statistical tests and suggests that they can be used to evaluate random and pseudorandom number generators in cryptographic applications. The Chinese standard GM/T 0005–2012 describes similar tests. These documents have not aged well. The weakest pseudorandom number generators will easily pass these tests, promoting false confidence in insecure systems. We strongly suggest that SP 800–22 be withdrawn by NIST; we consider it to be not just irrelevant but actively harmful. We illustrate this by discussing the “reference generators” contained in the SP 800–22 document itself. None of these generators are suitable for modern cryptography, yet they pass the tests. For future development, we suggest focusing on stochastic modeling of entropy sources instead of model-free statistical tests. Random bit generators should also be reviewed for potential asymmetric backdoors via trapdoor one-way functions, and for security against quantum computing attacks.
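To make the paper's point concrete, here is a hedged sketch (not taken from the paper) showing the SP 800–22 frequency (monobit) test accepting output from a trivially predictable linear congruential generator; the LCG parameters and bit-selection choice are assumptions for illustration.

```python
import math

def monobit_p_value(bits):
    """SP 800-22 frequency (monobit) test: p-value from the bias of ones vs zeros."""
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

# A deliberately weak generator: a small linear congruential generator (LCG).
# Its output is trivially predictable, i.e. cryptographically worthless.
def lcg_bits(n, seed=1, a=1103515245, c=12345, m=2**31):
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append((x >> 15) & 1)   # take one middle bit per step
    return out

p = monobit_p_value(lcg_bits(1_000_000))
print(f"monobit p-value for a predictable LCG: {p:.3f}  (passes if >= 0.01)")
```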
Oakley, Lisa, Oprea, Alina, Tripakis, Stavros.  2022.  Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems. 2022 IEEE 35th Computer Security Foundations Symposium (CSF). :380–395.
Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems including randomized protocols and reinforcement learning models. However, these methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, and may even be violated by an adversary who gains control of some system components. In this paper, we develop a formal framework for adversarial robustness in systems modeled as discrete time Markov chains (DTMCs). We base our framework on existing methods for verifying probabilistic temporal logic properties and extend it to include deterministic, memoryless policies acting in Markov decision processes (MDPs). Our framework includes a flexible approach for specifying structure-preserving and non-structure-preserving adversarial models. We outline a class of threat models under which adversaries can perturb system transitions, constrained by an ε-ball around the original transition probabilities. We define three main DTMC adversarial robustness problems: adversarial robustness verification, maximal δ synthesis, and worst-case attack synthesis. We present two optimization-based solutions to these three problems, leveraging traditional and parametric probabilistic model checking techniques. We then evaluate our solutions on two stochastic protocols and a collection of Grid World case studies, which model an agent acting in an environment described as an MDP. We find that the parametric solution results in fast computation for small parameter spaces. In the case of less restrictive (stronger) adversaries, the number of parameters increases, and directly computing property satisfaction probabilities is more scalable. We demonstrate the usefulness of our definitions and solutions by comparing system outcomes over various properties, threat models, and case studies.
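As an illustrative companion (not the paper's tooling, which relies on probabilistic model checking), the sketch below computes a reachability probability for a toy DTMC and brute-forces a worst case within an ε-ball around one transition row. The state space, ε handling, and the L-infinity-style constraint are assumptions made only for this example.

```python
import numpy as np
from itertools import product

# Toy 4-state DTMC: states 0,1 are transient, 2 is the "goal", 3 is a failure sink.
P = np.array([
    [0.0, 0.7, 0.2, 0.1],
    [0.3, 0.0, 0.5, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def reach_prob(P, goal=2, transient=(0, 1)):
    """Probability of eventually reaching `goal` from each transient state.
    Solves x = A x + b where A restricts P to the transient states."""
    t = list(transient)
    A = P[np.ix_(t, t)]
    b = P[np.ix_(t, [goal])].ravel()
    return np.linalg.solve(np.eye(len(t)) - A, b)

base = reach_prob(P)[0]

# Adversary perturbs the transitions out of state 0 within an epsilon ball,
# keeping the row a valid probability distribution.
eps, worst = 0.05, base
for d1, d2 in product(np.linspace(-eps, eps, 11), repeat=2):
    row = P[0].copy()
    row[2] += d1          # shift mass toward/away from the goal...
    row[3] += d2          # ...and toward/away from the failure sink
    row[1] -= d1 + d2     # compensate so the row still sums to 1
    if (row >= 0).all():
        Q = P.copy()
        Q[0] = row
        worst = min(worst, reach_prob(Q)[0])

print(f"baseline reach probability: {base:.3f}, worst case within eps={eps}: {worst:.3f}")
```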
Aggarwal, Naman, Aggarwal, Pradyuman, Gupta, Rahul.  2022.  Static Malware Analysis using PE Header files API. 2022 6th International Conference on Computing Methodologies and Communication (ICCMC). :159–162.
In today’s fast-paced world, cybercrime has repeatedly proved to be one of the biggest hindrances to national development. According to recent trends, victims' data is most often breached by trapping them in a phishing attack, so the security and privacy of user data have become matters of tremendous concern. To address this problem and protect naive users' data, a tool is proposed that helps identify whether a Windows executable is malicious by performing static analysis on it. A comparative study is also performed by implementing different classification models such as Logistic Regression, Neural Network, and SVM. The static analysis approach takes as input properties of the executables obtained from the PE section headers, i.e., API calls. Comparing the different models identifies the best model to use for static malware analysis.
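A minimal sketch of this kind of pipeline is shown below, assuming the `pefile` library for parsing imports and scikit-learn for the Logistic Regression model; the file paths and labels are hypothetical placeholders, and this is not the authors' implementation.

```python
import pefile
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def imported_apis(path):
    """Return the imported API names of a Windows executable as one space-joined string."""
    pe = pefile.PE(path, fast_load=True)
    pe.parse_data_directories()
    names = []
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        for imp in entry.imports:
            if imp.name:
                names.append(imp.name.decode(errors="ignore"))
    return " ".join(names)

# Hypothetical labelled corpus: paths to benign/malicious samples (placeholders).
paths, labels = ["benign1.exe", "malware1.exe"], [0, 1]
docs = [imported_apis(p) for p in paths]

# Bag-of-API-calls features feeding a Logistic Regression classifier.
model = make_pipeline(CountVectorizer(token_pattern=r"[^ ]+"),
                      LogisticRegression(max_iter=1000))
model.fit(docs, labels)
print(model.predict([imported_apis("unknown.exe")]))
```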
2023-01-20
Kim, Yeongwoo, Dán, György.  2022.  An Active Learning Approach to Dynamic Alert Prioritization for Real-time Situational Awareness. 2022 IEEE Conference on Communications and Network Security (CNS). :154–162.

Real-time situational awareness (SA) plays an essential role in accurate and timely incident response. Maintaining SA is, however, extremely costly due to excessive false alerts generated by intrusion detection systems, which require prioritization and manual investigation by security analysts. In this paper, we propose a novel approach to prioritizing alerts so as to maximize SA, by formulating the problem as that of active learning in a hidden Markov model (HMM). We propose to use the entropy of the belief of the security state as a proxy for the mean squared error (MSE) of the belief, and we develop two computationally tractable policies for choosing alerts to investigate that minimize the entropy, taking into account the potential uncertainty of the investigations' results. We use simulations to compare our policies to a variety of baseline policies. We find that our policies reduce the MSE of the belief of the security state by up to 50% compared to static baseline policies, and they are robust to high false alert rates and to the investigation errors.
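The paper's policies are more elaborate (they account for HMM dynamics and investigation errors); the sketch below only illustrates the core idea of using belief entropy as a proxy, picking the alert whose investigation minimizes the expected posterior entropy. The alert names, observation models, and prior belief are invented for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy hidden security state: 0 = benign, 1 = compromised.
belief = np.array([0.6, 0.4])

# Investigating an alert yields a noisy observation; each alert type has its own
# observation model P(obs | state). Rows are states, columns are observations.
obs_models = {
    "alert_A": np.array([[0.9, 0.1],     # benign:      P(clean), P(suspicious)
                         [0.3, 0.7]]),   # compromised: P(clean), P(suspicious)
    "alert_B": np.array([[0.6, 0.4],
                         [0.5, 0.5]]),
}

def expected_posterior_entropy(belief, obs_model):
    """Expected entropy of the updated belief after investigating this alert."""
    exp_h = 0.0
    for obs in range(obs_model.shape[1]):
        likelihood = obs_model[:, obs]
        p_obs = float(likelihood @ belief)
        if p_obs > 0:
            posterior = likelihood * belief / p_obs
            exp_h += p_obs * entropy(posterior)
    return exp_h

best = min(obs_models, key=lambda k: expected_posterior_entropy(belief, obs_models[k]))
print(f"prior entropy: {entropy(belief):.3f} bits; investigate {best} first")
```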

G, Emayashri, R, Harini, V, Abirami S, M, Benedict Tephila.  2022.  Electricity-Theft Detection in Smart Grids Using Wireless Sensor Networks. 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS). 1:2033—2036.
Satisfying the growing demand for electricity is a huge challenge for electricity providers without a robust and good infrastructure. For effective electricity management, the infrastructure has to be strengthened from the generation stage to the transmission and distribution stages. In the current electrical infrastructure, the evolution of smart grids provides a significant solution to the problems that exist in the conventional system. Enhanced management visibility and better monitoring and control are achieved by the integration of wireless sensor network technology in communication systems. However, infrastructural constraints impose a major challenge to implementing these solutions in the existing grids. Along with the choice of technology, it is also crucial to avoid exorbitant implementation costs. This paper presents a self-stabilizing hierarchical algorithm for the existing electrical network. Neighborhood Area Networks (NAN) and Home Area Networks (HAN) layers are used in the proposed architecture. The Home Node (HN), Simple Node (SN) and Cluster Head (CH) are the three types of nodes used in the model. Fraudulent users in the system are identified efficiently using the proposed model, based on observations made through simulation on the OMNeT++ simulator.
2023-01-13
Wu, Haijiang.  2022.  Effective Metrics Modeling of Big Data Technology in Electric Power Information Security. 2022 6th International Conference on Computing Methodologies and Communication (ICCMC). :607—610.
This article analyzes the application characteristics of electric power big data, identifies the advantages that big data provides for enterprise development, and expounds the power information security protection technologies and management measures required in the big data context, focusing on protecting power information security and fundamentally controlling the information security issues of power enterprises. It then analyzes the types of big data structures and effective measurement modeling, and finally, combining the current application of big data concepts in the construction of electric power information networks, proposes optimization strategies aimed at promoting the effectiveness of big data concepts in power information network management activities. Under the stated application conditions, the results show that the measurement model is improved by 7.8%.
Jin, Shipu.  2022.  Research on Computer Network Security Framework Based on Concurrent Data Detection and Security Modelling. 2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS). :1144–1147.
A formal modeling language, MCD, for concurrent systems is proposed, and its syntax, semantics, and formal definitions are given. MCD uses modules as basic components. Because detection rules are imperfect, packets that do not belong to intrusion attacks may be misjudged as attacks. A data detection algorithm based on the MCD concurrency model protects against hidden computer viruses and security threats, and efficiency is increased by 7.5%. Finally, a computer network security protection system is researched based on security modeling.
2023-01-06
Bogatyrev, Vladimir A., Bogatyrev, Stanislav V., Bogatyrev, Anatoly V..  2022.  Choosing the Discipline of Restoring Computer Systems with Acceptable Degradation with Consolidation of Node Resources Saved After Failures. 2022 International Conference on Information, Control, and Communication Technologies (ICCT). :1—4.
An approach to substantiating the choice of a maintenance discipline for a redundant computer system, with the possible use of node resources saved after failures, is considered. The choice is aimed at improving the reliability and profitability of the system, taking into account the operational costs of restoring nodes. Reliability models of systems with these service disciplines are proposed, providing both the possibility of immediate recovery of nodes after failures and allowing degradation of the system when using node resources preserved after failures. The models take into account whether the loss of information accumulated during the operation of the system is admissible. The operating costs are determined, taking into account the costs of restoring nodes for the system maintenance disciplines under consideration.
Bogatyrev, Vladimir A., Bogatyrev, Stanislav V., Bogatyrev, Anatoly V..  2022.  Reliability and Timeliness of Servicing Requests in Infocommunication Systems, Taking into Account the Physical and Information Recovery of Redundant Storage Devices. 2022 International Conference on Information, Control, and Communication Technologies (ICCT). :1—4.
Markov models of reliability of fault-tolerant computer systems are proposed, taking into account two stages of recovery of redundant memory devices. At the first stage, the physical recovery of memory devices is implemented; at the second, informational recovery consists of entering the data necessary to perform the required functions. Memory redundancy is carried out to increase the stability of the system to the loss of unique data generated during the operation of the system. Data replication is implemented in all functional memory devices. Information recovery is carried out using replicas of data stored in working memory devices. The model takes into account the criticality of the system to the timeliness of calculations in real time and to the impossibility of restoring information after multiple memory failures, leading to the loss of all stored replicas of unique data. The system readiness coefficient and the probability of its transition to a non-recoverable state are determined. The readiness of the system for the timely execution of requests is evaluated, taking into account the influence of the shares of the distribution of the performance of the computer allocated for the maintenance of requests and for the entry of information into memory after its physical recovery.
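The following is a hedged, much-simplified illustration of a two-stage recovery Markov model (not the paper's model): a single memory device cycles through operational, physically failed, and information-restoring states, and the steady-state availability is computed from the generator matrix. All rates are invented for the example.

```python
import numpy as np

# States: 0 = operational, 1 = failed (physical repair in progress),
#         2 = physically repaired, information being restored.
lam  = 0.01   # failure rate (1/h)          -- illustrative numbers only
mu_p = 0.5    # physical repair rate (1/h)
mu_i = 1.0    # information restore rate (1/h)

# Continuous-time Markov chain generator matrix (rows sum to zero).
Q = np.array([
    [-lam,    lam,   0.0 ],
    [  0.0, -mu_p,  mu_p ],
    [ mu_i,   0.0, -mu_i ],
])

# Steady-state distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"steady-state availability (device operational): {pi[0]:.4f}")
```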
Yu, Xiao, Wang, Dong, Sun, Xiaojuan, Zheng, Bingbing, Du, Yankai.  2022.  Design and Implementation of a Software Disaster Recovery Service for Cloud Computing-Based Aerospace Ground Systems. 2022 11th International Conference on Communications, Circuits and Systems (ICCCAS). :220—225.
The data centers of cloud computing-based aerospace ground systems and the businesses running on them are extremely vulnerable to man-made disasters, emergencies, and other disasters, which means security is seriously threatened. Thus, cloud centers need to provide effective disaster recovery services for software and data. However, the disaster recovery methods of current cloud centers of aerospace ground systems have long lagged behind, and their disaster tolerance and anti-destruction capability are weak. Aiming at the above problems, in this paper we design a disaster recovery service for aerospace ground systems based on cloud computing. Based on a software warehouse, this service adopts a main-standby mode to achieve backup, local disaster recovery, and remote disaster recovery of software and data. As a result, this service can respond to disasters in a timely manner, ensure the continuous running of businesses, and improve the disaster tolerance and anti-destruction capability of aerospace ground systems. Extensive simulation experiments validate the effectiveness of the disaster recovery service proposed in this paper.
Erbil, Pinar, Gursoy, M. Emre.  2022.  Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning. 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :1—8.
Federated learning (FL) has emerged as a promising paradigm for distributed training of machine learning models. In FL, several participants train a global model collaboratively by only sharing model parameter updates while keeping their training data local. However, FL was recently shown to be vulnerable to data poisoning attacks, in which malicious participants send parameter updates derived from poisoned training data. In this paper, we focus on defending against targeted data poisoning attacks, where the attacker’s goal is to make the model misbehave for a small subset of classes while the rest of the model is relatively unaffected. To defend against such attacks, we first propose a method called MAPPS for separating malicious updates from benign ones. Using MAPPS, we propose three methods for attack detection: MAPPS + X-Means, MAPPS + VAT, and their Ensemble. Then, we propose an attack mitigation approach in which a "clean" model (i.e., a model that is not negatively impacted by an attack) can be trained despite the existence of a poisoning attempt. We empirically evaluate all of our methods using popular image classification datasets. Results show that we can achieve >95% true positive rates while incurring only <2% false positive rate. Furthermore, the clean models that are trained using our proposed methods have accuracy comparable to models trained in an attack-free scenario.
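A hedged sketch of the general idea (not the paper's MAPPS method): flag potentially poisoned federated updates by clustering the per-participant parameter-update vectors and treating the minority cluster as suspect. KMeans stands in here for the X-Means used in the paper, and all update values are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(18, 50))      # 18 honest participants' updates
poisoned = rng.normal(0.8, 0.1, size=(2, 50))     # 2 attackers push a different direction
updates = np.vstack([benign, poisoned])

# Cluster the update vectors; the smaller cluster is treated as suspicious.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(updates)
minority = np.argmin(np.bincount(labels))
suspects = np.where(labels == minority)[0]
print("suspected malicious participant indices:", suspects)

# Mitigation idea from the abstract: build the "clean" global update by aggregating
# only the updates not flagged as suspicious.
clean_mask = labels != minority
global_update = updates[clean_mask].mean(axis=0)
```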
Feng, Yu, Ma, Benteng, Zhang, Jing, Zhao, Shanshan, Xia, Yong, Tao, Dacheng.  2022.  FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :20844—20853.
In recent years, the security of AI systems has drawn increasing research attention, especially in the medical imaging realm. To develop a secure medical image analysis (MIA) system, it is a must to study possible backdoor attacks (BAs), which can embed hidden malicious behaviors into the system. However, designing a unified BA method that can be applied to various MIA systems is challenging due to the diversity of imaging modalities (e.g., X-Ray, CT, and MRI) and analysis tasks (e.g., classification, detection, and segmentation). Most existing BA methods are designed to attack natural image classification models, which apply spatial triggers to training images and inevitably corrupt the semantics of poisoned pixels, leading to the failures of attacking dense prediction models. To address this issue, we propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various MIA tasks. Specifically, FIBA leverages a trigger function in the frequency domain that can inject the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitude of both images. Since it preserves the semantics of the poisoned image pixels, FIBA can perform attacks on both classification and dense prediction models. Experiments on three benchmarks in MIA (i.e., ISIC-2019 [4] for skin lesion classification, KiTS-19 [17] for kidney tumor segmentation, and EAD-2019 [1] for endoscopic artifact detection), validate the effectiveness of FIBA and its superiority over state-of-the-art methods in attacking MIA models and bypassing backdoor defense. Source code will be made available.
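Below is a hedged sketch of the frequency-injection idea described in the abstract (not the authors' code): blend the low-frequency spectral amplitude of a trigger image into a target image while keeping the target's phase, then invert the FFT. The blend strength `alpha`, mask radius, and grayscale setting are assumptions.

```python
import numpy as np

def inject_low_freq_amplitude(target, trigger, alpha=0.15, radius=16):
    """target, trigger: 2-D grayscale arrays of equal shape; alpha: blend strength."""
    Ft = np.fft.fftshift(np.fft.fft2(target))
    Fg = np.fft.fftshift(np.fft.fft2(trigger))
    amp_t, phase_t = np.abs(Ft), np.angle(Ft)
    amp_g = np.abs(Fg)

    # Mask selecting the low-frequency region around the spectrum centre.
    h, w = target.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2

    # Linearly combine the amplitudes only inside the low-frequency mask,
    # keeping the target's phase so pixel semantics are largely preserved.
    amp_mix = amp_t.copy()
    amp_mix[mask] = (1 - alpha) * amp_t[mask] + alpha * amp_g[mask]

    poisoned = np.fft.ifft2(np.fft.ifftshift(amp_mix * np.exp(1j * phase_t)))
    return np.real(poisoned)

target = np.random.rand(64, 64)
trigger = np.random.rand(64, 64)
poisoned = inject_low_freq_amplitude(target, trigger)
print("max pixel change:", float(np.abs(poisoned - target).max()))
```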
Zhu, Yanxu, Wen, Hong, Zhang, Peng, Han, Wen, Sun, Fan, Jia, Jia.  2022.  Poisoning Attack against Online Regression Learning with Maximum Loss for Edge Intelligence. 2022 International Conference on Computing, Communication, Perception and Quantum Technology (CCPQT). :169—173.
Recent trends in the convergence of edge computing and artificial intelligence (AI) have led to a new paradigm of “edge intelligence”, which is more vulnerable to attacks such as data and model poisoning and evasion attacks. This paper proposes a white-box poisoning attack against online regression models in edge intelligence environments, with the aim of preparing protection methods in the future. First, the method selects the data points with maximum loss from the original stream using two selection strategies; second, it pollutes these points with a gradient-ascent strategy; finally, it injects the polluted points into the original stream sent to the target model to complete the attack. We extensively evaluate the proposed attack on an open dataset; the results demonstrate the effectiveness of the novel attack method and the real implications of poisoning attacks in a case-study electric energy prediction application.
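A hedged illustration of the attack pattern described in the abstract (selection of maximum-loss points, then gradient ascent on the loss) against a simple online linear regression model follows; the model, data, and step sizes are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ w_true + rng.normal(scale=0.1, size=200)

def train_online(X, y, lr=0.05):
    """Victim: online least-squares regression trained with plain SGD."""
    w = np.zeros(2)
    for x, t in zip(X, y):
        w -= lr * (x @ w - t) * x
    return w

w_clean = train_online(X, y)   # white-box attacker can inspect this model

def poison(w, X, y, k=20, step=0.5, iters=5):
    """Select the k maximum-loss points, then move their features by gradient ascent."""
    losses = (X @ w - y) ** 2
    idx = np.argsort(losses)[-k:]
    Xp = X.copy()
    for _ in range(iters):
        grad = 2 * (Xp[idx] @ w - y[idx])[:, None] * w[None, :]   # d(loss)/d(x)
        Xp[idx] += step * grad                                    # ascent on the loss
    return Xp

w_poisoned = train_online(poison(w_clean, X, y), y)
print("clean model:   ", w_clean)
print("poisoned model:", w_poisoned, " (true weights:", w_true, ")")
```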
Alotaibi, Jamal, Alazzawi, Lubna.  2022.  PPIoV: A Privacy Preserving-Based Framework for IoV- Fog Environment Using Federated Learning and Blockchain. 2022 IEEE World AI IoT Congress (AIIoT). :597—603.
The integration of the Internet-of-Vehicles (IoV) and fog computing benefits from cooperative computing and analysis of environmental data while avoiding network congestion and latency. However, when private data is shared across fog nodes or the cloud, there exist privacy issues that limit the effectiveness of IoV systems, putting drivers' safety at risk. To address this problem, we propose a framework called PPIoV, which is based on Federated Learning (FL) and Blockchain technologies to preserve the privacy of vehicles in IoV. Typical machine learning methods are not well suited for distributed and highly dynamic systems like IoV since they train on data with local features. Therefore, we use FL to train the global model while preserving privacy. Also, our approach is built on a scheme that evaluates the reliability of vehicles participating in the FL training process. Moreover, PPIoV is built on blockchain to establish trust across multiple communication nodes. For example, when the local learned model updates from the vehicles and fog nodes are communicated with the cloud to update the global learned model, all transactions take place on the blockchain. The outcome of our experimental study shows that the proposed method improves the global model's accuracy as a result of allowing reputed vehicles to update the global model.
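A hedged sketch of the aggregation idea in the abstract (not the PPIoV implementation): a federated-averaging round in which each vehicle's model update is weighted by a reputation/reliability score. The blockchain logging is out of scope of this snippet, and all scores and updates are invented.

```python
import numpy as np

def reputation_weighted_fedavg(global_weights, local_updates, reputations):
    """local_updates: list of weight vectors; reputations: non-negative scores."""
    rep = np.asarray(reputations, dtype=float)
    rep = rep / rep.sum()                       # normalise to a convex combination
    stacked = np.stack(local_updates)
    # Move the global model toward each local model, proportionally to reputation.
    return global_weights + (rep[:, None] * (stacked - global_weights)).sum(axis=0)

global_w = np.zeros(4)
updates = [np.array([0.2, 0.1, 0.0, 0.3]),      # reputed vehicle
           np.array([0.1, 0.2, 0.1, 0.2]),      # reputed vehicle
           np.array([5.0, -4.0, 6.0, -5.0])]    # low-reputation / suspicious vehicle
reps = [1.0, 1.0, 0.05]
print("new global model:", reputation_weighted_fedavg(global_w, updates, reps))
```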
Golatkar, Aditya, Achille, Alessandro, Wang, Yu-Xiang, Roth, Aaron, Kearns, Michael, Soatto, Stefano.  2022.  Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366—8376.
We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks. A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning, to improve the trade-off. AdaMix reduces the error increase from the non-private upper bound from 167–311% of the baseline, on average across 6 datasets, to 68–92% depending on the desired privacy level selected by the user. AdaMix tackles the trade-off arising in visual classification, whereby the most privacy sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
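For readers unfamiliar with the mechanics behind private fine-tuning, the sketch below shows a generic DP-SGD step (per-example gradient clipping plus Gaussian noise) on a logistic-regression head that could sit on top of publicly pre-trained features. This is not AdaMix itself; the clip norm, noise scale, and synthetic data are assumptions, and no privacy accounting is performed.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0):
    """One differentially-private SGD step: clip each example's gradient, add noise."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))
    per_example_grads = (p - y)[:, None] * X                     # logistic-loss gradients
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)  # clip L2 norm to <= clip
    noise = rng.normal(0.0, sigma * clip, size=w.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_mean

w = np.zeros(5)                        # e.g. weights of a linear head on public features
X_priv = rng.normal(size=(64, 5))      # "private" batch (synthetic here)
y_priv = (X_priv[:, 0] > 0).astype(float)
for _ in range(100):
    w = dp_sgd_step(w, X_priv, y_priv)
print("fine-tuned head weights:", np.round(w, 2))
```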
Daughety, Nathan, Pendleton, Marcus, Perez, Rebeca, Xu, Shouhuai, Franco, John.  2022.  Auditing a Software-Defined Cross Domain Solution Architecture. 2022 IEEE International Conference on Cyber Security and Resilience (CSR). :96—103.
In the context of cybersecurity systems, trust is the firm belief that a system will behave as expected. Trustworthiness is the proven property of a system that is worthy of trust. Therefore, trust is ephemeral, i.e. trust can be broken; trustworthiness is perpetual, i.e. trustworthiness is verified and cannot be broken. The gap between these two concepts is one which is, alarmingly, often overlooked. In fact, the pressure to meet with the pace of operations for mission critical cross domain solution (CDS) development has resulted in a status quo of high-risk, ad hoc solutions. Trustworthiness, proven through formal verification, should be an essential property in any hardware and/or software security system. We have shown, in "vCDS: A Virtualized Cross Domain Solution Architecture", that developing a formally verified CDS is possible. virtual CDS (vCDS) additionally comes with security guarantees, i.e. confidentiality, integrity, and availability, through the use of a formally verified trusted computing base (TCB). In order for a system, defined by an architecture description language (ADL), to be considered trustworthy, the implemented security configuration, i.e. access control and data protection models, must be verified correct. In this paper we present the first and only security auditing tool which seeks to verify the security configuration of a CDS architecture defined through ADL description. This tool is useful in mitigating the risk of existing solutions by ensuring proper security enforcement. Furthermore, when coupled with the agile nature of vCDS, this tool significantly increases the pace of system delivery.
2023-01-05
Sarwar, Asima, Hasan, Salva, Khan, Waseem Ullah, Ahmed, Salman, Marwat, Safdar Nawaz Khan.  2022.  Design of an Advance Intrusion Detection System for IoT Networks. 2022 2nd International Conference on Artificial Intelligence (ICAI). :46–51.
The Internet of Things (IoT) is advancing technology by creating smart surroundings that make it easier for humans to do their work. This technological advancement not only improves human life and expands economic opportunities, but also allows intruders or attackers to discover and exploit numerous methods in order to circumvent the security of IoT networks. Hence, security and privacy are the key concerns for IoT networks. It is vital to protect computer and IoT networks from many sorts of anomalies and attacks. Traditional intrusion detection systems (IDS) collect and employ large amounts of data with irrelevant and inappropriate attributes to train machine learning models, resulting in long detection times and a high rate of misclassification. This research presents an advanced approach for the design of IDS for IoT networks based on the Particle Swarm Optimization algorithm (PSO) for feature selection and the Extreme Gradient Boosting (XGB) model for the PSO fitness function. The classifier utilized in the intrusion detection process is Random Forest (RF). The IoTID20 dataset is utilized to evaluate the efficacy and robustness of the suggested strategy. The proposed system attains the following accuracy on the IoTID20 dataset: 98% for binary classification and 83% for multiclass classification. The results indicate that the proposed framework effectively detects cyber threats and improves the security of IoT networks.
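A hedged sketch of this pipeline (not the authors' code): a small binary PSO searches feature subsets, XGBoost cross-validation accuracy serves as the fitness, and a Random Forest is trained on the selected features. The IoTID20 loading step is replaced by a synthetic placeholder dataset, and the PSO coefficients are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    """XGBoost cross-validation accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = XGBClassifier(n_estimators=50, max_depth=3, verbosity=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, n_feat, iters = 8, X.shape[1], 10
pos = (rng.random((n_particles, n_feat)) > 0.5).astype(float)   # binary positions
vel = rng.normal(0, 0.1, (n_particles, n_feat))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_feat))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

# Final classifier: Random Forest trained on the PSO-selected features.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
print("selected features:", np.flatnonzero(gbest),
      "RF cv accuracy:", cross_val_score(rf, X[:, gbest.astype(bool)], y, cv=3).mean().round(3))
```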
Wei, Lianghao, Cai, Zhaonian, Zhou, Kun.  2022.  Multi-objective Gray Wolf Optimization Algorithm for Multi-agent Pathfinding Problem. 2022 IEEE 5th International Conference on Electronics Technology (ICET). :1241–1249.
As a core problem of multi-agent systems, multi-agent pathfinding has an important impact on the efficiency of multi-agent systems. Because of this, many novel multi-agent pathfinding methods have been proposed over the years. However, these methods have focused on different agents with different goals, and less research has been done on scenarios where different agents have the same goal. We propose a multi-agent pathfinding method incorporating a multi-objective gray wolf optimization algorithm to solve the multi-agent pathfinding problem with the same objective. First, constrained optimization modeling is performed to obtain objective functions for agent wholeness and security. Then, the multi-objective gray wolf optimization algorithm is improved for solving the constrained optimization problem and further optimized for scenarios with insufficient computational resources. To verify the effectiveness of the multi-objective gray wolf optimization algorithm, we conduct experiments in a series of simulation environments and compare the improved multi-objective gray wolf optimization algorithm with some classical swarm intelligence optimization algorithms. The results show that the multi-agent pathfinding method incorporating the multi-objective gray wolf optimization algorithm is more efficient in handling multi-agent pathfinding problems with the same objective.
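For orientation, here is a hedged sketch of the core (single-objective) gray wolf optimizer position update on a toy objective standing in for a path-cost function; the paper's multi-objective, constrained pathfinding variant is considerably more involved, and all parameters below are assumptions.

```python
import numpy as np

def gwo_minimize(f, dim=2, n_wolves=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Basic gray wolf optimizer: wolves move toward the three best solutions."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]   # three best leaders
        a = 2.0 - 2.0 * t / iters                              # linearly decreasing coefficient
        new = np.empty_like(wolves)
        for i, x in enumerate(wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                d = np.abs(C * leader - x)                     # encircling distance
                candidates.append(leader - A * d)
            new[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
        wolves = new
    best = wolves[np.argmin(np.apply_along_axis(f, 1, wolves))]
    return best, f(best)

# Toy objective standing in for a path-cost function.
best, val = gwo_minimize(lambda x: np.sum(x ** 2))
print("best position:", np.round(best, 3), "objective:", round(float(val), 6))
```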
Sewak, Mohit, Sahay, Sanjay K., Rathore, Hemant.  2022.  X-Swarm: Adversarial DRL for Metamorphic Malware Swarm Generation. 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). :169–174.
Advanced metamorphic malware and ransomware use techniques like obfuscation to alter their internal structure with every attack. Therefore, any signature extracted from one such attack and used to bolster endpoint defense cannot avert subsequent attacks, and if even a single such malware intrudes on even a single device of an IoT network, it will continue to infect the entire network. Scenarios where an entire network is targeted by a coordinated swarm of such malware are not beyond imagination. The IoT era therefore requires Industry-4.0-grade AI-based solutions against such advanced attacks. But AI-based solutions need a large repository of data extracted from similar attacks to learn robust representations, whereas developing metamorphic malware is a very complex task requiring extreme human ingenuity. Hence, there does not exist abundant metamorphic malware to train AI-based defensive solutions, and there is currently no system that could generate enough functionality-preserving metamorphic variants of multiple malware to train AI-based defensive systems. To this end, we design and develop a novel system, named X-Swarm. X-Swarm uses deep policy-based adversarial reinforcement learning to generate swarms of metamorphic instances of any malware by obfuscating them at the opcode level and ensuring that they can evade even capable, adversarial-attack-immune endpoint defense systems.
Bansal, Lakshya, Chaurasia, Shefali, Sabharwal, Munish, Vij, Mohit.  2022.  Blockchain Integration with end-to-end traceability in the Food Supply Chain. 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE). :1152—1156.
The food supply chain is a complex but necessary food production arrangement needed by the global community to maintain sustainability and food security. For the past few years, entities that are part of the food processing system have usually taken the food supply chain for granted, forgetting that just one disturbance in the chain can lead to poisoning, scarcity, or increased prices. This continually affects the vulnerable in society, including impoverished individuals and small restaurants and grocers. The food supply chain has expanded across the globe to involve many more entities, making the chain longer and more problematic and leaving the traditional logistics pattern unable to match customer expectations. Food supply chains involve many challenges, such as a lack of traceability and communication, the supply of fraudulent food products, and failures in monitoring warehouses. Therefore, there is a need for a system that ensures authentic information about products and a reliable trading mechanism. In this paper, we propose a comprehensive solution that makes the supply chain consumer-centric by using blockchain. Blockchain technology applies to the food industry in a mindful and holistic manner to verify and certify the quality of food products by presenting authentic information about the products from the initial stages. The problem formulation, simulation, and performance analysis are also discussed in this research work.
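A hedged sketch of end-to-end traceability with a simple hash-chained ledger follows; it is a stand-in for the paper's blockchain platform (which the abstract does not specify), and the actors, batch identifiers, and events are invented. Each supply-chain event links to the previous one, so any tampering breaks the chain.

```python
import hashlib, json, time

def make_block(prev_hash, event):
    """Create a ledger entry whose hash covers the previous hash and the event data."""
    body = {"prev": prev_hash, "time": time.time(), "event": event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

chain = [make_block("0" * 64, {"actor": "farm", "action": "harvest", "batch": "B-001"})]
for event in ({"actor": "processor", "action": "package", "batch": "B-001"},
              {"actor": "distributor", "action": "ship", "batch": "B-001"},
              {"actor": "retailer", "action": "receive", "batch": "B-001"}):
    chain.append(make_block(chain[-1]["hash"], event))

def verify(chain):
    """Recompute each hash and check the backward links; False means tampering."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: block[k] for k in ("prev", "time", "event")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

print("trace for batch B-001 valid:", verify(chain))
```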
2022-12-20
Rakin, Adnan Siraj, Chowdhuryy, Md Hafizul Islam, Yao, Fan, Fan, Deliang.  2022.  DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories. 2022 IEEE Symposium on Security and Privacy (SP). :1157–1174.
Recent advancements in Deep Neural Networks (DNNs) have enabled widespread deployment in multiple security-sensitive domains. The need for resource-intensive training and the use of valuable domain-specific training data have made these models the top intellectual property (IP) for model owners. One of the major threats to DNN privacy is model extraction attacks where adversaries attempt to steal sensitive information in DNN models. In this work, we propose an advanced model extraction framework DeepSteal that steals DNN weights remotely for the first time with the aid of a memory side-channel attack. Our proposed DeepSteal comprises two key stages. Firstly, we develop a new weight bit information extraction method, called HammerLeak, through adopting the rowhammer-based fault technique as the information leakage vector. HammerLeak leverages several novel system-level techniques tailored for DNN applications to enable fast and efficient weight stealing. Secondly, we propose a novel substitute model training algorithm with Mean Clustering weight penalty, which leverages the partial leaked bit information effectively and generates a substitute prototype of the target victim model. We evaluate the proposed model extraction framework on three popular image datasets (e.g., CIFAR-10/100/GTSRB) and four DNN architectures (e.g., ResNet-18/34/Wide-ResNet/VGG-11). The extracted substitute model has successfully achieved more than 90% test accuracy on deep residual networks for the CIFAR-10 dataset. Moreover, our extracted substitute model could also generate effective adversarial input samples to fool the victim model. Notably, it achieves similar performance (i.e., 1-2% test accuracy under attack) as white-box adversarial input attack (e.g., PGD/Trades).
ISSN: 2375-1207
Liu, Xiaolei, Li, Xiaoyu, Zheng, Desheng, Bai, Jiayu, Peng, Yu, Zhang, Shibin.  2022.  Automatic Selection Attacks Framework for Hard Label Black-Box Models. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–7.
Current adversarial attacks against machine learning models can be divided into white-box attacks and black-box attacks. Black-box attacks can be further subdivided into soft-label and hard-label attacks, but the latter only return the class with the highest prediction probability, which makes gradient estimation difficult. However, because of their wide applicability, exploring hard-label black-box attacks is of great research significance and application value. This paper proposes an Automatic Selection Attacks Framework (ASAF) for hard-label black-box models, which can be explained in two aspects based on existing attack methods. First, ASAF applies model equivalence to select substitute models automatically so as to generate adversarial examples and then completes black-box attacks based on their transferability. Second, specified feature selection and a parallel attack method are proposed to shorten the attack time and improve the attack success rate. The experimental results show that ASAF can achieve more than a 90% success rate for non-targeted attacks on common models for the traditional datasets, ResNet-101 (CIFAR10) and InceptionV4 (ImageNet). Meanwhile, compared with FGSM and other attack algorithms, the attack time is reduced by at least 89.7% and 87.8%, respectively, on the two traditional datasets. Besides, ASAF achieves a 90% success rate when attacking an online model, BaiduAI digital recognition. In conclusion, ASAF is the first automatic selection attacks framework for hard-label black-box models, in which specified feature selection and parallel attack methods speed up automatic attacks.
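A hedged sketch of the transfer-attack idea behind ASAF (not the authors' framework): craft an FGSM adversarial example on a local substitute model with known gradients, then submit it to a hard-label black-box target that only returns a class label. The linear models, perturbation budget, and substitute-target mismatch below are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.normal(size=(3, 10))                        # hidden parameters of the black-box
W_sub = W_target + rng.normal(scale=0.1, size=(3, 10))     # imperfect white-box substitute

def hard_label_target(x):
    """The only thing the attacker observes: the predicted class index."""
    return int(np.argmax(W_target @ x))

def fgsm_on_substitute(x, true_label, eps=0.3):
    """FGSM step computed on the substitute model, then transferred to the target."""
    logits = W_sub @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(len(p))[true_label]
    grad_x = W_sub.T @ (p - onehot)            # gradient of cross-entropy w.r.t. the input
    return x + eps * np.sign(grad_x)

x = rng.normal(size=10)
label = hard_label_target(x)
x_adv = fgsm_on_substitute(x, label)
print("original label:", label, "-> label after transferred attack:", hard_label_target(x_adv))
```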
2022-12-09
Casimiro, Maria, Romano, Paolo, Garlan, David, Rodrigues, Luís.  2022.  Towards a Framework for Adapting Machine Learning Components. 2022 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). :131—140.
Machine Learning (ML) models are now commonly used as components in systems. As any other component, ML components can produce erroneous outputs that may penalize system utility. In this context, self-adaptive systems emerge as a natural approach to cope with ML mispredictions, through the execution of adaptation tactics such as model retraining. To synthesize an adaptation strategy, the self-adaptation manager needs to reason about the cost-benefit tradeoffs of the applicable tactics, which is a non-trivial task for tactics such as model retraining, whose benefits are both context- and data-dependent. To address this challenge, this paper proposes a probabilistic modeling framework that supports automated reasoning about the cost/benefit tradeoffs associated with improving ML components of ML-based systems. The key idea of the proposed approach is to decouple the problems of (i) estimating the expected performance improvement after retrain and (ii) estimating the impact of ML improved predictions on overall system utility. We demonstrate the application of the proposed framework by using it to self-adapt a state-of-the-art ML-based fraud-detection system, which we evaluate using a publicly-available, real fraud detection dataset. We show that by predicting system utility stemming from retraining a ML component, the probabilistic model checker can generate adaptation strategies that are significantly closer to the optimal, as compared against baselines such as periodic retraining, or reactive retraining.
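A hedged, much-simplified illustration of the cost/benefit reasoning the framework automates (the paper itself uses a probabilistic model checker): compare the expected utility of retraining an ML component now against keeping the current model. All numbers below are invented placeholders.

```python
# Illustrative inputs: the tactic's cost and the two decoupled estimates from the abstract,
# (i) probability that retraining improves the model and (ii) utility impact if it does.
retrain_cost = 5.0                 # e.g. compute + downtime cost of the retrain tactic
p_improvement = 0.7                # estimated probability retraining helps (context/data dependent)
utility_gain_if_improved = 12.0    # estimated system-utility gain from better predictions
utility_gain_if_not = 0.0

expected_utility_retrain = (p_improvement * utility_gain_if_improved
                            + (1 - p_improvement) * utility_gain_if_not
                            - retrain_cost)
expected_utility_nop = 0.0         # keep the current model, pay nothing, gain nothing

best_tactic = "retrain" if expected_utility_retrain > expected_utility_nop else "no retrain"
print(f"E[utility | retrain] = {expected_utility_retrain:.1f}; choose: {best_tactic}")
```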