Biblio

Filters: Keyword is AI
Johri, Era, Dharod, Leesa, Joshi, Rasika, Kulkarni, Shreya, Kundle, Vaibhavi.  2022.  Video Captcha Proposition based on VQA, NLP, Deep Learning and Computer Vision. 2022 5th International Conference on Advances in Science and Technology (ICAST). :196–200.
Visual Question Answering (VQA) is a technique used in diverse domains, ranging from simple visual questions and answers on short videos to security. In this paper, we discuss a video captcha to be deployed for user authentication. A random short video of 10 to 20 seconds is displayed, and automated questions and answers are generated by the system using AI and ML. Automated programs have maliciously targeted gateways such as login and registration. In today's environment it is therefore necessary to deploy security programs that can recognize the objects in a video and generate automated multiple-choice questions (MCQs) in real time, covering context such as object movements, color, and background. The highlighted features in the video are recorded for generating MCQs based on the short videos. These videos can be random in nature: they can be taken from official websites or even from a user's local computer, with prior permission from the user. The format of the video must be kept constant and must be cross-checked before it is shown to the user. Once our system verifies the captcha and determines the authenticity of a user, the website the user wants to log in to can skip its own captcha verification step, as it will have been done by our system. A session is maintained for the user, eliminating the hassle of repeated authentication for no reason. Once a video has been shown for an IP address and the user's answers to the current video captcha are correct, we store information such as the IP address, the video, and the questions in our database to avoid repeating the same captcha for the same IP address. In this paper, we propose a methodology for executing the above and discuss the benefits and limitations of video captcha along with visual question answering.
Anderegg, Alfred H. Andy, Ferrell, Uma D..  2022.  Assurance Case Along a Safety Continuum. 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC). :1–10.
The FAA proposes a Safety Continuum that recognizes that public expectations for safety outcomes vary across aviation sectors with different missions, aircraft, and environments. The purpose is to align the rigor of oversight with public expectations. An aircraft, its variants, or its derivatives may be used in operations with different expectations. Differences in mission might bring immutable risks for some applications that reuse or revise the original aircraft type design. The continuum enables a more agile design approval process for innovations in the context of a dynamic ecosystem, addressing the creation of variants for different sectors and needs. Since an aircraft type design can be reused in various operations under Part 91 or 135 with different mission risks, the assurance case will have many branches reflecting the variants and derivatives. This paper proposes a model for a holistic, performance-based, through-life safety assurance case that focuses applicant and oversight alike on achieving the safety outcomes. This paper describes the application of the goal-based, technology-neutral features of performance-based assurance cases, extending the philosophy of UL 4600, to the Safety Continuum. It specifically addresses component reuse, including third-party vehicle modifications and changes to the operational concept or ecosystem. The performance-based assurance argument offers a way to combine design approval more seamlessly with the oversight functions by focusing all aspects of the argument and practice together to manage the safety outcomes. The model provides the context to assure that mitigated risks are consistent with an operation's place on the safety continuum, while allowing the applicant to reuse parts of the assurance argument to innovate variants or derivatives. The focus on monitoring performance to constantly verify the safety argument complements compliance checking as a way to assure products are "fit-for-use".
The paper explains how continued operational safety becomes a natural part of monitoring the assurance case for growing variety in a product line by accounting for the ecosystem changes. Such a model could be used with the Safety Continuum to promote applicant and operator accountability delivering the expected safety outcomes.
ISSN: 2155-7209
Ferrell, Uma D., Anderegg, Alfred H. Andy.  2022.  Holistic Assurance Case for System-of-Systems. 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC). :1–9.
Aviation is a highly sophisticated and complex System-of-Systems (SoS) with equally complex safety oversight. As novel products with autonomous functions and interactions between component systems are adopted, the number of interdependencies within and among the SoS grows. These interactions may not always be obvious. Understanding how proposed products (component systems) fit into the context of a larger SoS is essential to promote the safe use of new as well as conventional technology. UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, was written specifically for fully autonomous road vehicles. The goal-based, technology-neutral features of this standard make it adaptable to other industries and applications. This paper, using the philosophy of UL 4600, gives guidance for creating an assurance case for products in an SoS context. An assurance argument is a cogent structured argument concluding that an autonomous aircraft system possesses all applicable through-life performance and safety properties. The assurance case process can be repeated at each level in the SoS: aircraft, aircraft system, unmodified components, and modified components. The Original Equipment Manufacturer (OEM) develops the assurance case for the whole aircraft envisioned in the type certification process. Assurance cases are continuously validated by collecting and analyzing Safety Performance Indicators (SPIs). SPIs provide predictive safety information, thus offering an opportunity to improve safety by preventing incidents and accidents. Continuous validation is essential for risk-based approval of autonomously evolving (dynamic) systems, learning systems, and new technology. System variants, derivatives, and components are captured in a subordinate assurance case by their developer. These variants of the assurance case inherently reflect the evolution of the vehicle-level derivatives and options in the context of their specific target ecosystem.
These subordinate assurance cases are nested under the argument put forward by the OEM of components and aircraft, for certification credit. It has become a common practice in aviation to address design hazards through operational mitigations. It is also common for hazards noted in an aircraft component system to be mitigated within another component system. Where a component system depends on risk mitigation in another component of the SoS, organizational responsibilities must be stated explicitly in the assurance case. However, current practices do not formalize accounting for these dependencies by the parties responsible for design; consequently, subsequent modifications are made without the benefit of critical safety-related information from the OEMs. The resulting assurance cases, including third-party vehicle modifications, must be scrutinized as part of the holistic validation process. When changes are made to a product represented within the assurance case, their impact must be analyzed and reflected in an updated assurance case. An OEM can facilitate this by integrating affected assurance cases across their customers' supply chains to ensure their validity. The OEM is expected to exercise the sphere-of-control over their product even if it includes outsourced components. Any organization that modifies a product (with or without assurance argumentation information from other suppliers) is accountable for validating the conditions for any dependent mitigations. For example, the OEM may manage the assurance argumentation by identifying requirements and supporting SPIs that must be applied in all component assurance cases. For their part, component assurance cases must accommodate all spheres-of-control that mitigate the risks they present in their respective contexts. The assurance case must express how interdependent mitigations will collectively assure the outcome.
These considerations are much more than interface requirements and include explicit hazard mitigation dependencies between SoS components. A properly integrated SoS assurance case reflects a set of interdependent systems that could be independently developed. Even in this extremely interconnected environment, stakeholders must make accommodations for the independent evolution of products in a manner that protects proprietary information, domain knowledge, and safety data. The collective safety outcome for the SoS is based on the interdependence of mitigations by each constituent component and could not be accomplished by any single component. This dependency must be explicit in the assurance case and should include operational mitigations predicated on people and processes. Assurance cases could be used to gain regulatory approval of conventional and new technology. They can also serve to demonstrate consistency with a desired level of safety, especially in SoSs for which existing standards may not be adequate. This paper also provides guidelines for preserving alignment between component assurance cases along a product supply chain and the respective SoSs that they support. It shows how assurance is a continuous process that spans product evolution through the monitoring of interdependent requirements and SPIs. The interdependency necessary for a successful assurance case encourages stakeholders to identify and formally accept critical interconnections between related organizations. The resulting coordination promotes accountability for safety through increased awareness and the cultivation of a positive safety culture.
ISSN: 2155-7209
Anastasakis, Zacharias, Psychogyios, Konstantinos, Velivassaki, Terpsi, Bourou, Stavroula, Voulkidis, Artemis, Skias, Dimitrios, Gonos, Antonis, Zahariadis, Theodore.  2022.  Enhancing Cyber Security in IoT Systems using FL-based IDS with Differential Privacy. 2022 Global Information Infrastructure and Networking Symposium (GIIS). :30—34.
Nowadays, IoT networks and devices exist in our everyday life, capturing and carrying unlimited data. However, the increasing penetration of connected systems and devices implies rising cybersecurity threats, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and Machine Learning take advantage of huge volumes of IoT network logs to enhance cybersecurity in IoT. However, these data are often required to remain private. Federated Learning (FL) provides a potential solution, enabling collaborative training of an attack detection model among a set of federated nodes while preserving privacy, as data remain local and are never disclosed or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model which can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy as a privacy-preserving technique and various configurations of the participating nodes. We show that the proposed model protects privacy without actually compromising performance: it incurs a limited performance impact of only ∼7% lower testing accuracy compared to the baseline, while simultaneously guaranteeing security and applicability.
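The abstract does not give the model's details; as a generic illustration of the FL-with-differential-privacy idea it describes, the following minimal sketch (all values, bounds, and noise parameters are hypothetical) clips each client's local update, adds Gaussian noise, and averages the noisy updates on the server so that raw data never leaves the clients:

```python
import random

def clip(update, max_norm):
    # L2-clip a local model update so any one client's contribution is bounded
    norm = sum(w * w for w in update) ** 0.5
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [w * scale for w in update]

def privatize(update, max_norm, noise_std, rng):
    # Add Gaussian noise calibrated to the clipping bound (DP-SGD style)
    return [w + rng.gauss(0.0, noise_std * max_norm)
            for w in clip(update, max_norm)]

def federated_average(updates):
    # Server averages the (noisy) client updates
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

rng = random.Random(0)
client_updates = [[0.5, -1.2, 0.3], [0.4, -1.0, 0.2], [0.6, -1.1, 0.4]]
noisy = [privatize(u, max_norm=1.0, noise_std=0.1, rng=rng) for u in client_updates]
global_update = federated_average(noisy)
print(global_update)  # close to the clipped mean, perturbed by DP noise
```

The clipping step is what makes the noise scale meaningful: without a bound on each update's norm, no finite amount of noise yields a differential privacy guarantee.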
Hai, Xuesong, Liu, Jing.  2022.  PPDS: Privacy Preserving Data Sharing for AI applications Based on Smart Contracts. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC). :1561—1566.
With the development of artificial intelligence, the need for data sharing is becoming more and more urgent. However, existing data sharing methods can no longer fully meet data sharing needs. Privacy breaches, lack of motivation, and mutual distrust have become obstacles to data sharing. We design a privacy-preserving, decentralized data sharing method based on blockchain smart contracts, named PPDS. To protect data privacy, we transform the data sharing problem into a model sharing problem. This means that the data owner does not need to directly share the raw data, but rather the AI model trained with such data. The data requester and the data owner interact on the blockchain through a smart contract. The data owner trains the model with local data according to the requester's requirements. To fairly assess model quality, we set up several model evaluators to assess the validity of the model through voting. After the model is verified, the data owner who trained the model receives a reward in return through a smart contract. Sharing the model avoids direct exposure of the raw data, and the reasonable incentive provides a motivation for the data owner to share the data. We describe the design and workflow of PPDS and analyze its security using formal verification technology; that is, we use Coloured Petri Nets (CPN) to build a formal model of our approach, proving its security through simulation execution and model checking. Finally, we demonstrate the effectiveness of PPDS by developing a prototype with a corresponding case application.
Alotaibi, Jamal, Alazzawi, Lubna.  2022.  PPIoV: A Privacy Preserving-Based Framework for IoV- Fog Environment Using Federated Learning and Blockchain. 2022 IEEE World AI IoT Congress (AIIoT). :597—603.
The integration of the Internet-of-Vehicles (IoV) and fog computing benefits from cooperative computing and analysis of environmental data while avoiding network congestion and latency. However, when private data is shared across fog nodes or the cloud, there exist privacy issues that limit the effectiveness of IoV systems, putting drivers' safety at risk. To address this problem, we propose a framework called PPIoV, which is based on Federated Learning (FL) and Blockchain technologies to preserve the privacy of vehicles in IoV. Typical machine learning methods are not well suited for distributed and highly dynamic systems like IoV since they train on data with local features. Therefore, we use FL to train the global model while preserving privacy. Also, our approach is built on a scheme that evaluates the reliability of vehicles participating in the FL training process. Moreover, PPIoV is built on blockchain to establish trust across multiple communication nodes. For example, when the locally learned model updates from the vehicles and fog nodes are communicated to the cloud to update the global learned model, all transactions take place on the blockchain. The outcome of our experimental study shows that the proposed method improves the global model's accuracy as a result of allowing reputable vehicles to update the global model.
Jagadeesha, Nishchal.  2022.  Facial Privacy Preservation using FGSM and Universal Perturbation attacks. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:46—52.
Research done in facial privacy so far has established that race, age, and gender, which are classifiable and compliant biometric attributes, can be gleaned from a human's facial image. Noticeable distortions, morphing, and face-swapping are some of the techniques that have been researched to restore consumers' privacy. By fooling face recognition models, these techniques cater superficially to the needs of user privacy; however, the presence of visible manipulations negatively affects the aesthetics of the image. The objective of this work is to highlight common adversarial techniques that can be used to introduce granular pixel distortions using white-box and black-box perturbation algorithms, ensuring the privacy of users' sensitive or personal data in face images and fooling AI facial recognition models while maintaining the aesthetics and visual integrity of the image.
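FGSM (the white-box attack the title names) is a standard technique: perturb the input by ε in the direction of the sign of the loss gradient, x' = x + ε·sign(∇ₓL). The following toy sketch (not the paper's implementation; the "embedding", weights, and ε are all hypothetical) applies one FGSM step to a logistic model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step on a logistic model: x' = x + eps * sign(dL/dx).

    For cross-entropy loss with logit z = w.x + b, the input gradient is
    dL/dx = (sigmoid(z) - y) * w.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    grad_x = [(sigmoid(z) - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)   # -1, 0, or +1
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

# Toy "face feature" vector and classifier weights (illustrative values only)
x = [0.2, -0.4, 0.7]
w, b, y = [1.0, -2.0, 0.5], 0.1, 1      # true label 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.05)
print(x_adv)  # each coordinate shifted by ±eps
```

Because only the sign of the gradient is used, every pixel (or feature) moves by exactly ±ε, which is what keeps the distortion granular and visually subtle for small ε.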
Yang, Xuefeng, Liu, Li, Zhang, Yinggang, Li, Yihao, Liu, Pan, Ai, Shili.  2022.  A Privacy-preserving Approach to Distributed Set-membership Estimation over Wireless Sensor Networks. 2022 9th International Conference on Dependable Systems and Their Applications (DSA). :974—979.
This paper focuses on systems over wireless sensor networks. The system considered is linear, discrete in time, and time-varying, i.e., a discrete-time linear time-varying system (DLTVS). DLTVSs are vulnerable to network attacks when exchanging information between sensors in the network, which puts their security at risk. A DLTVS with privacy preservation is designed for this purpose. A set-membership estimator is designed by adding privacy noise obeying the Laplace distribution to the state at the initial moment. Simultaneously, the differential privacy of the system is analyzed. On this basis, the real state of the system and the form of the estimator for the desired distribution are analyzed. Finally, simulation examples are given, which show that the model, after adding differential privacy, can obtain accurate estimates and ensure the security of the system state.
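The Laplace-noise step the abstract describes is the classical Laplace mechanism; a minimal generic sketch (sensitivity, ε, and state values are hypothetical, not the paper's) that perturbs an initial state vector via inverse-CDF sampling:

```python
import math
import random

def laplace_sample(scale, rng):
    # Inverse-CDF sampling: u ~ Uniform(-0.5, 0.5),
    # x = -scale * sign(u) * ln(1 - 2|u|)
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize_state(state, sensitivity, epsilon, rng):
    # Laplace mechanism: scale b = sensitivity / epsilon gives epsilon-DP
    b = sensitivity / epsilon
    return [x + laplace_sample(b, rng) for x in state]

rng = random.Random(42)
x0 = [1.0, -2.0]                 # initial system state (illustrative values)
x0_private = privatize_state(x0, sensitivity=1.0, epsilon=0.5, rng=rng)
print(x0_private)
```

Smaller ε means a larger noise scale b and hence stronger privacy, at the cost of a looser set-membership estimate, which is exactly the trade-off the paper's simulations examine.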
Salama, Ramiz, Al-Turjman, Fadi.  2022.  AI in Blockchain Towards Realizing Cyber Security. 2022 International Conference on Artificial Intelligence in Everything (AIE). :471—475.
Blockchain and artificial intelligence are two technologies that, when combined, can help each other realize their full potential. Blockchains can guarantee accessible and consistent access to integrity-protected big data indexes from numerous areas, allowing AI systems to learn more effectively and thoroughly. Similarly, artificial intelligence (AI) can be used to offer new consensus processes, and hence new methods of engaging with blockchains. When it comes to sensitive data, such as corporate, healthcare, and financial data, various security and privacy problems arise that must be properly evaluated. Interaction with blockchains is vulnerable to issues of data credibility checking, transactional data leakage, data protection rules compliance, on-chain data privacy, and malicious smart contracts. To solve these issues, new security and privacy-preserving technologies are being developed. Blockchain data processing, either based on AI or using AI to defend it, is emerging to simplify the integration of these two cutting-edge technologies.
S, Harichandana B S, Agarwal, Vibhav, Ghosh, Sourav, Ramena, Gopi, Kumar, Sumit, Raja, Barath Raj Kandur.  2022.  PrivPAS: A real time Privacy-Preserving AI System and applied ethics. 2022 IEEE 16th International Conference on Semantic Computing (ICSC). :9—16.
With 3.78 billion social media users worldwide in 2021 (48% of the human population), almost 3 billion images are shared daily. At the same time, a consistent evolution of smartphone cameras has led to a photography explosion, with 85% of all new pictures being captured using smartphones. However, lately, there has been increased discussion of privacy concerns when a person being photographed is unaware of the picture being taken or has reservations about it being shared. These privacy violations are amplified for people with disabilities, who may find it challenging to raise dissent even if they are aware. Such unauthorized image captures may also be misused to gain sympathy by third-party organizations, leading to a privacy breach. Privacy for people with disabilities has so far received comparatively less attention from the AI community. This motivates us to work towards a solution that generates privacy-conscious cues to raise smartphone users' awareness of any sensitivity in their viewfinder content. To this end, we introduce PrivPAS (a real-time Privacy-Preserving AI System), a novel framework to identify sensitive content. Additionally, we curate and annotate a dataset to identify and localize accessibility markers and to classify whether an image is sensitive with regard to a featured subject with a disability. We demonstrate that the proposed lightweight architecture, with a memory footprint of a mere 8.49 MB, achieves a high mAP of 89.52% on resource-constrained devices. Furthermore, our pipeline, trained on face-anonymized data, achieves an F1-score of 73.1%.
Banciu, Doina, Cîrnu, Carmen Elena.  2022.  AI Ethics and Data Privacy compliance. 2022 14th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1—5.
Throughout history, technological evolution has generated less desirable side effects with an impact on society. In the field of IT&C, there are ongoing discussions about the role of robots within the economy, but also about their impact on the labour market. In the case of digital media systems, we talk about misinformation, manipulation, fake news, etc. Issues related to the protection of citizens' lives in the face of technology began to be addressed more than 25 years ago. In addition to the many messages such as "the citizen is at the center of concern" or "privacy must be respected", transmitted through various channels by different entities or companies in the field of ICT, the EU has promoted a number of legislative and normative documents to protect citizens' rights and freedoms.
Ham, MyungJoo, Woo, Sangjung, Jung, Jaeyun, Song, Wook, Jang, Gichan, Ahn, Yongjoo, Ahn, Hyoungjoo.  2022.  Toward Among-Device AI from On-Device AI with Stream Pipelines. 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). :285—294.
Modern consumer electronic devices often provide intelligence services with deep neural networks. We have started migrating the computing locations of intelligence services from cloud servers (traditional AI systems) to the corresponding devices (on-device AI systems). On-device AI systems generally have the advantages of preserving privacy, removing network latency, and saving cloud costs. With the emergence of on-device AI systems having relatively low computing power, the inconsistent and varying hardware resources and capabilities pose difficulties. The authors' affiliation has started applying a stream pipeline framework, NNStreamer, to on-device AI systems, saving development costs and hardware resources and improving performance. We want to expand the types of devices and applications with on-device AI services to products of both the affiliation and second/third parties. We also want to make each AI service atomic, re-deployable, and shared among connected devices of arbitrary vendors; as always happens, yet another requirement has now been introduced. The new requirement of "among-device AI" includes connectivity between AI pipelines so that they may share computing resources and hardware capabilities across a wide range of devices regardless of vendors and manufacturers. We propose extensions of the stream pipeline framework, NNStreamer, for on-device AI so that NNStreamer may provide among-device AI capability. This work is a Linux Foundation (LF AI & Data) open source project accepting contributions from the general public.
Abbasi, Wisam, Mori, Paolo, Saracino, Andrea, Frascolla, Valerio.  2022.  Privacy vs Accuracy Trade-Off in Privacy Aware Face Recognition in Smart Systems. 2022 IEEE Symposium on Computers and Communications (ISCC). :1—8.
This paper proposes a novel approach to privacy-preserving face recognition aimed at formally defining a trade-off optimization criterion between data privacy and algorithm accuracy. In our methodology, real-world face images are anonymized with Gaussian blurring for privacy preservation. The anonymized images are processed for face detection, face alignment, face representation, and face verification. The proposed methodology has been validated with a set of experiments on a well-known dataset and three face recognition classifiers. The results demonstrate the effectiveness of our approach in correctly verifying face images at different levels of privacy and accuracy, and in maximizing privacy with the least negative impact on face detection and face verification accuracy.
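Gaussian blurring, the anonymization step named above, is a standard separable convolution; here is a minimal grayscale sketch (the paper's pipeline, kernel size, and σ values are not reproduced; the 5×5 "face patch" is purely illustrative):

```python
import math

def gaussian_kernel(sigma, radius):
    # Normalized 1-D Gaussian kernel of length 2*radius + 1
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(row, kernel, radius):
    # Convolve one row with edge clamping at the borders
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(row) - 1)
            acc += kv * row[idx]
        out.append(acc)
    return out

def gaussian_blur(image, sigma=1.0, radius=2):
    # Separable Gaussian blur: filter rows first, then columns
    kernel = gaussian_kernel(sigma, radius)
    rows = [blur_1d(r, kernel, radius) for r in image]
    cols = [blur_1d(list(c), kernel, radius) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

face_region = [[0, 0, 255, 0, 0] for _ in range(5)]   # toy 5x5 "face" patch
anonymized = gaussian_blur(face_region, sigma=1.0)
print(anonymized[2])   # the sharp bright stripe is smeared across neighbors
```

Larger σ spreads identifying detail over more pixels, which is the privacy knob; the paper's contribution is choosing that level against the resulting loss in verification accuracy.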
Golatkar, Aditya, Achille, Alessandro, Wang, Yu-Xiang, Roth, Aaron, Kearns, Michael, Soatto, Stefano.  2022.  Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366—8376.
We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks. A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning, to improve the trade-off. AdaMix reduces the error increase over the non-private upper bound from the baseline's 167–311%, on average across 6 datasets, to 68–92%, depending on the privacy level selected by the user. AdaMix tackles the trade-off arising in visual classification, whereby the most privacy-sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
Nisansala, Sewwandi, Chandrasiri, Gayal Laksara, Prasadika, Sonali, Jayasinghe, Upul.  2022.  Microservice Based Edge Computing Architecture for Internet of Things. 2022 2nd International Conference on Advanced Research in Computing (ICARC). :332—337.
Distributed computation and AI processing at the edge have been identified as an efficient solution for delivering real-time IoT services and applications, compared to cloud-based paradigms. These solutions are expected to support delay-sensitive IoT applications, autonomic decision making, and smart service creation at the edge, in contrast to traditional IoT solutions. However, existing solutions have limitations concerning distributed and simultaneous resource management for AI computation and data processing at the edge; concurrent and real-time application execution; and platform-independent deployment. Hence, we first propose a novel three-layer architecture that facilitates the above service requirements. We have then developed a novel platform and relevant modules with integrated AI processing and edge computing paradigms, considering issues related to scalability, heterogeneity, security, and interoperability of IoT services. Further, each component is designed to handle control signals, data flows, microservice orchestration, and resource composition to match the IoT application requirements. Finally, the effectiveness of the proposed platform is tested and verified.
Chu, Mingde, Song, Yufei.  2021.  Analysis of network security and privacy security based on AI in IOT environment. 2021 IEEE 4th International Conference on Information Systems and Computer Aided Education (ICISCAE). :390–393.
With the development of information technology, the Internet of Things (IoT) has gradually become the third wave of the global information industry revolution, after the computer and the Internet. Artificial intelligence (AI) and IoT technology are important prerequisites for the rapid development of the current information society. However, while AI and IoT technologies bring convenient and intelligent services to people, they also have many defects and imperfections in their development. Therefore, it is necessary to pay more attention to the development of AI and IoT technologies, actively improve the application system, and create a network security management system for AI and IoT applications that can detect intrusions in a timely manner, assess risk, and prevent viruses. In this paper, the network security risks caused by AI and IoT applications are analyzed. To ensure the security of the IoT environment, network security and privacy security have become the primary problems to be solved, and management should be strengthened from technical to legal aspects.
Alotaiby, Turky N., Alshebeili, Saleh A., Alotibi, Gaseb.  2021.  Subject Authentication using Time-Frequency Image Textural Features. 2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). :130—133.
Growing internet-based services such as banking and shopping have brought both ease to people's lives and challenges in user identity authentication. Different methods have been investigated for user authentication, such as retina, fingerprint, and face recognition. This study introduces a photoplethysmogram (PPG) based user identity authentication scheme relying on textural features extracted from time-frequency images. The PPG signal is divided into segments, and each segment is transformed into the time-frequency domain using the continuous wavelet transform (CWT). Then, textural features are extracted from the time-frequency images using Haralick's method. Finally, a classifier is employed for identity authentication purposes. The proposed system achieved average accuracies of 99.14% and 99.9% with segment lengths of one and twenty seconds, respectively, using a random forest classifier.
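Haralick's textural features are derived from a gray-level co-occurrence matrix (GLCM); the small pure-Python sketch below (the paper's CWT images and full feature set are not reproduced; the 4-level "time-frequency image" is illustrative) computes two common ones, contrast and homogeneity, for horizontally adjacent pixel pairs:

```python
def glcm(image, levels):
    # Co-occurrence counts for horizontally adjacent pixel pairs (offset (0, 1)),
    # normalized to a joint probability matrix
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[c / total for c in r] for r in m]

def haralick_contrast(p):
    # Weighted by squared gray-level difference: large for abrupt transitions
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def haralick_homogeneity(p):
    # Inverse difference moment: large when mass lies near the diagonal
    n = len(p)
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))

# Toy 4-level "time-frequency image" (illustrative values only)
tf_image = [[0, 0, 1, 1],
            [0, 0, 1, 1],
            [2, 2, 3, 3],
            [2, 2, 3, 3]]
p = glcm(tf_image, levels=4)
features = [haralick_contrast(p), haralick_homogeneity(p)]
print(features)
```

A feature vector like this, computed per time-frequency image segment, is what would then be fed to a classifier such as the random forest mentioned in the abstract.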
Sooraksa, Nanta.  2021.  A Survey of using Computational Intelligence (CI) and Artificial Intelligence (AI) in Human Resource (HR) Analytics. 2021 7th International Conference on Engineering, Applied Sciences and Technology (ICEAST). :129—132.
Human Resource (HR) Analytics has increasingly attracted attention over the past decade. This is because the field has adopted data-driven approaches to process and interpret human resources data for meaningful insights. The field supports HR decision making, helping to understand why people, organizations, or other aspects of business performance behave the way they do. Embracing the available decision-making and learning tools from the fields of computational intelligence (CI) and Artificial Intelligence (AI) in HR creates tremendous opportunities for HR Analytics in practical aspects. However, there are still too few applications in this area. This paper serves as a survey of these tools and their applications in HR, involving recruitment, retention, reward, and retirement. An example of using CI and AI for career development and training in the era of disruption is conceptually proposed.
Dijk, Allard.  2021.  Detection of Advanced Persistent Threats using Artificial Intelligence for Deep Packet Inspection. 2021 IEEE International Conference on Big Data (Big Data). :2092–2097.

Advanced persistent threats (APTs) are stealthy threat actors with the skills to gain covert control of a computer network for an extended period of time. They are the highest cyber attack risk factor for large companies and states. A successful APT attack can cost millions of dollars, can disrupt civil life, and has the capability to do physical damage. APT groups are typically state-sponsored and are considered the most effective and skilled cyber attackers. APT attacks are executed in several stages, as described in the Lockheed Martin cyber kill chain (CKC). Each of these APT stages can potentially be identified as patterns in network traffic. Using the "APT-2020" dataset, which compiles the characteristics and stages of an APT, we carried out experiments on the detection of anomalous traffic for all APT stages. We compare several artificial intelligence models, such as a stacked autoencoder, a recurrent neural network, and a one-class support vector machine, and show significant improvements on detection in the data exfiltration stage. This dataset is the first to include a data exfiltration stage to experiment on; according to APT-2020's authors, current models face their biggest challenge in this stage. We introduce a method to successfully detect data exfiltration by analyzing the payload of the network traffic flow. This flow-based deep packet inspection approach improves detection compared to other state-of-the-art methods.
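The paper's models (stacked autoencoder, RNN, one-class SVM) are too heavy to reproduce here, but the underlying idea of anomaly-based exfiltration detection can be sketched generically: fit a profile of benign flows and score new flows by their deviation. The following minimal example (the flow features and threshold are hypothetical, not from APT-2020) uses a mean-absolute-z-score in place of a learned reconstruction error:

```python
def fit_profile(flows):
    # Per-feature mean and standard deviation over benign training flows
    n = len(flows)
    means = [sum(col) / n for col in zip(*flows)]
    stds = [max((sum((v - m) ** 2 for v in col) / n) ** 0.5, 1e-9)
            for col, m in zip(zip(*flows), means)]
    return means, stds

def anomaly_score(flow, means, stds):
    # Mean absolute z-score: large when a flow deviates from the benign profile
    return sum(abs((v - m) / s) for v, m, s in zip(flow, means, stds)) / len(flow)

benign = [[100, 5], [110, 6], [95, 5], [105, 4]]   # toy (bytes, packets) flows
means, stds = fit_profile(benign)
exfil = [5000, 40]                                  # unusually large outbound flow
print(anomaly_score(exfil, means, stds))            # far above benign scores
```

An autoencoder-based detector follows the same pattern, with the z-score replaced by reconstruction error on payload-derived features; flows scoring above a threshold are flagged for the exfiltration stage.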

Aldossary, Lina Abdulaziz, Ali, Mazen, Alasaadi, Abdulla.  2021.  Securing SCADA Systems against Cyber-Attacks using Artificial Intelligence. 2021 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT). :739—745.
Monitoring and managing electric power generation, distribution, and transmission requires supervisory control and data acquisition (SCADA) systems. As technology has developed, these systems have become huge, complicated, and distributed, which makes them susceptible to new risks. In particular, the lack of security in SCADA systems makes them a target for network attacks such as denial of service (DoS), and developing solutions for this issue is the main objective of this work. After reviewing various existing solutions for securing SCADA systems, a new security approach is recommended that employs Artificial Intelligence (AI), an innovative approach that imparts learning ability to software. Here, deep learning and machine learning algorithms are used to develop an intrusion detection system (IDS) to combat cyber-attacks. Various methods and algorithms are evaluated to obtain the best results in intrusion detection. The results reveal that the Bi-LSTM IDS technique provides the highest intrusion detection (ID) performance compared with previous techniques for securing SCADA systems.
Catak, Evren, Catak, Ferhat Ozgur, Moldsvor, Arild.  2021.  Adversarial Machine Learning Security Problems for 6G: mmWave Beam Prediction Use-Case. 2021 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). :1–6.
6G is the next generation of communication systems. In recent years, machine learning algorithms have been applied widely in various fields such as health, transportation, and autonomous vehicles, and predictive algorithms will likewise be used for 6G problems. With the rapid development of deep learning techniques, it is critical to take security concerns into account when applying these algorithms. While machine learning offers significant advantages for 6G, the security of AI models is usually ignored; given the many real-world applications, security is a vital part of these algorithms. This paper proposes a mitigation method, based on adversarial learning, for adversarial attacks against 6G machine learning models for millimeter-wave (mmWave) beam prediction. The main idea behind adversarial attacks against machine learning models is to produce faulty results by manipulating trained deep learning models for 6G mmWave beam prediction applications. We also present the adversarial learning mitigation method's performance for 6G security in the mmWave beam prediction application under the fast gradient sign method (FGSM) attack. The mean square errors of the defended model under attack are very close to those of the undefended model without attack.
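The FGSM attack mentioned in this abstract perturbs each input in the direction of the sign of the loss gradient. A minimal numpy sketch on a linear regression stand-in for a beam-prediction model (the paper's deep models and mmWave data are not reproduced here) shows how even a small perturbation budget drives up the mean square error:

```python
# FGSM sketch: x_adv = x + eps * sign(d loss / d x), squared-error loss.
import numpy as np

rng = np.random.default_rng(1)

w_true = rng.normal(size=8)                       # "ground truth" mapping
w_model = w_true + 0.05 * rng.normal(size=8)      # imperfectly trained model
X = rng.normal(size=(200, 8))
y = X @ w_true                                    # clean regression targets

def fgsm(x, y_true, w, eps):
    """Gradient of (x@w - y)^2 w.r.t. x is 2*(x@w - y)*w; step by its sign."""
    grad = 2.0 * (x @ w - y_true)[:, None] * w[None, :]
    return x + eps * np.sign(grad)

X_adv = fgsm(X, y, w_model, eps=0.1)

mse_clean = float(np.mean((X @ w_model - y) ** 2))
mse_adv = float(np.mean((X_adv @ w_model - y) ** 2))
print(f"clean MSE {mse_clean:.4f}, adversarial MSE {mse_adv:.4f}")
```

Adversarial training, the mitigation the paper uses, would augment the training set with such `X_adv` samples so the defended model's MSE under attack stays close to the clean MSE.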
Wang, Xiaoyu, Han, Zhongshou, Yu, Rui.  2021.  Security Situation Prediction Method of Industrial Control Network Based on Ant Colony-RBF Neural Network. 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE). :834–837.
To understand future trends in network security, the field has introduced the concept of NSSA (Network Security Situation Awareness). This paper implements a situation assessment model that uses game-theoretic algorithms to calculate the situation value of attack and defense behavior. After analyzing the ant colony algorithm and the RBF neural network, the paper remedies the defects of the RBF neural network using the advantages of the ant colony algorithm, and realizes a situation prediction model based on the ant colony-RBF neural network. Finally, the model is verified experimentally.
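As an illustration of the RBF-network half of this hybrid model, the sketch below predicts the next value of a toy "situation value" series from a sliding window. In the paper, the ant colony algorithm searches for good RBF centers and widths; here, purely for illustration, centers are sampled from the training data and the width is fixed, with output weights fit by least squares.

```python
# Minimal RBF-network regression sketch (ACO center search replaced by
# random sampling for brevity).
import numpy as np

rng = np.random.default_rng(2)

# Toy situation-value time series, predicted from a sliding window.
t = np.linspace(0, 6 * np.pi, 300)
series = np.sin(t) + 0.05 * rng.normal(size=t.size)
window = 5
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

centers = X[rng.choice(len(X), size=20, replace=False)]  # stand-in for ACO
width = 1.0

def rbf_features(X, centers, width):
    # Gaussian basis: exp(-||x - c||^2 / (2 * width^2)) for each center c.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = rbf_features(X, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # fit output-layer weights

pred = Phi @ w
mse = float(np.mean((pred - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

The hybrid model's contribution is precisely in replacing the arbitrary center/width choice above with an ant colony search, which is what addresses the RBF network's sensitivity to those parameters.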
Ferraro, Angelo.  2020.  When AI Gossips. 2020 IEEE International Symposium on Technology and Society (ISTAS). :69–71.
The concept of AI Gossip is presented. It is analogous to the traditional understanding of a pernicious human failing, made more egregious by AI technology, the internet, and current privacy policies and practices. Recognition by the technological community of its own complacency is critical to realizing the damaging influence of AI Gossip on human rights. A current example from the medical field is provided to facilitate the discussion and illustrate its seriousness. Further study and model development are encouraged to support the development of standards addressing the implications and consequences for human rights and dignity.
Lee, Dongseop, Kim, Hyunjin, Ryou, Jaecheol.  2020.  Poisoning Attack on Show and Tell Model and Defense Using Autoencoder in Electric Factory. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :538–541.
Recently, deep neural network technology has been developed and used in various fields. Image recognition models can be used for automatic safety checks at an electric factory. However, as deep neural networks develop, the importance of security increases. A poisoning attack is one such security problem: an attack that degrades a model by injecting malicious data into its training data set. This paper generates adversarial data that shifts feature values toward different targets by manipulating only small RGB-value changes. It then carries out a poisoning attack on one image recognition model, the Show and Tell model, and uses an autoencoder to defend against the adversarial data.
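The autoencoder defense works by flagging inputs whose reconstruction error is high, since adversarial perturbations push samples off the manifold the autoencoder learned from clean data. For brevity the sketch below uses a *linear* autoencoder, whose optimal solution coincides with PCA, rather than the deep autoencoder in the paper; the data and dimensions are hypothetical.

```python
# Reconstruction-error defense sketch: a linear autoencoder (PCA) trained on
# clean data flags samples with small off-manifold perturbations.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical flattened image features: clean data lies near a 5-D subspace.
basis = rng.normal(size=(5, 64))
clean = rng.normal(size=(400, 5)) @ basis + 0.01 * rng.normal(size=(400, 64))

# Poisoned samples: clean images plus small off-subspace noise, mimicking
# slightly perturbed RGB values.
poisoned = clean[:30] + 0.3 * rng.normal(size=(30, 64))

# "Train" the linear autoencoder on clean data via SVD (top PCA directions).
mean = clean.mean(axis=0)
_, _, Vt = np.linalg.svd(clean - mean, full_matrices=False)
W = Vt[:5]  # shared encoder/decoder weights

def recon_error(x):
    z = (x - mean) @ W.T          # encode
    x_hat = z @ W + mean          # decode
    return np.linalg.norm(x - x_hat, axis=1)

threshold = recon_error(clean).max()
flagged = recon_error(poisoned) > threshold
print(f"flagged {int(flagged.sum())}/{len(poisoned)} poisoned samples")
```

A deep autoencoder replaces the linear encode/decode pair with nonlinear layers but keeps the same filtering logic: reject (or reconstruct and re-feed) any training sample whose reconstruction error exceeds the clean-data threshold.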
Laato, Samuli, Farooq, Ali, Tenhunen, Henri, Pitkamaki, Tinja, Hakkala, Antti, Airola, Antti.  2020.  AI in Cybersecurity Education- A Systematic Literature Review of Studies on Cybersecurity MOOCs. 2020 IEEE 20th International Conference on Advanced Learning Technologies (ICALT). :6—10.

Machine learning (ML) techniques are changing both the offensive and defensive aspects of cybersecurity. The implications are especially strong for privacy, as ML approaches provide unprecedented opportunities to make use of collected data. Thus, education on cybersecurity and AI is needed. To investigate how AI and cybersecurity should be taught together, we look at previous studies of cybersecurity MOOCs by conducting a systematic literature review. The initial search resulted in 72 items; after screening for peer-reviewed publications on cybersecurity online courses, 15 studies remained. Three of the studies concerned multiple cybersecurity MOOCs, whereas 12 focused on individual courses. The number of published works evaluating specific cybersecurity MOOCs was found to be small compared with the number of available cybersecurity MOOCs. Analysis of the studies revealed that cybersecurity education is, in almost all cases, organized by topic rather than by the tools used, making it difficult for learners to find focused information on AI applications in cybersecurity. Furthermore, there is a gap in the academic literature on how AI applications in cybersecurity should be taught in online courses.