Biblio

Found 12218 results

2021-05-13
Mahmoud, Loreen, Praveen, Raja.  2020.  Artificial Neural Networks for detecting Intrusions: A survey. 2020 Fifth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN). :41–48.
Nowadays, network attacks have become very sophisticated and hard to recognize, and traditional intrusion detection systems (IDS) have become inefficient at predicting new types of attacks. As the IDS is an important factor in securing the network in real time, many new and effective IDS approaches have been proposed. In this paper, we discuss different Artificial Neural Network (ANN) based IDS approaches, categorize them into four categories (normal ANN, DNN, CNN, RNN), and compare them on different performance parameters (accuracy, FNR, FPR, training time, epochs, and learning rate) as well as other factors such as network structure, classification type, and the dataset used. At the end of the survey, we note the merits and demerits of each approach and suggest enhancements to avoid the observed drawbacks.
Wang, Xiaoyu, Gao, Yuanyuan, Zhang, Guangna, Guo, Mingxi.  2020.  Prediction of Optimal Power Allocation for Enhancing Security-Reliability Tradeoff with the Application of Artificial Neural Networks. 2020 2nd International Conference on Advances in Computer Technology, Information Science and Communications (CTISC). :40–45.
In this paper, we propose a power allocation scheme to improve both security and reliability in wireless two-hop threshold-selection decode-and-forward (DF) relaying networks, in which it is crucial to set a threshold on the signal-to-noise ratio (SNR) of the source signal at the relay nodes for perfect decoding. We adopt maximal-ratio combining (MRC) of the received SNR from the direct and relaying paths at both the destination and the eavesdropper. Notably, closed-form expressions for the outage probability and the intercept probability are derived, which quantify reliability and security, respectively. We also employ a metric for the security-reliability tradeoff (SRT) and examine the relationship between the two in the balanced case. Beyond that, in pursuing tradeoff performance, the power allocation tends to depend on the threshold value; in other words, the scheme provides a new method for optimally dividing the total power between the source and the relay according to the threshold. The results are obtained from analysis, confirmed by simulation, and predicted by artificial neural networks (ANNs) trained with the back-propagation (BP) algorithm, verifying the feasibility of the proposed method.
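As a loose illustration of the outage side of this tradeoff (not the paper's two-hop DF model): under Rayleigh fading the instantaneous SNR is exponentially distributed with mean γ̄, and the outage probability below a threshold γth has the closed form 1 − exp(−γth/γ̄), which a Monte-Carlo simulation can confirm. All numerical values below are invented for illustration.

```python
import math
import random

def outage_probability_closed_form(mean_snr, threshold):
    """P(SNR < threshold) for an exponentially distributed SNR (Rayleigh fading)."""
    return 1.0 - math.exp(-threshold / mean_snr)

def outage_probability_monte_carlo(mean_snr, threshold, trials=200_000, seed=7):
    """Estimate the same probability by sampling instantaneous SNR values."""
    rng = random.Random(seed)
    outages = sum(1 for _ in range(trials)
                  if rng.expovariate(1.0 / mean_snr) < threshold)
    return outages / trials

mean_snr, threshold = 10.0, 3.0  # linear scale, illustrative
analytic = outage_probability_closed_form(mean_snr, threshold)
simulated = outage_probability_monte_carlo(mean_snr, threshold)
print(f"closed form: {analytic:.4f}, simulated: {simulated:.4f}")
```

The intercept probability on the eavesdropper's link would be computed analogously, with the comparison direction reversed.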
Everson, Douglas, Cheng, Long.  2020.  Network Attack Surface Simplification for Red and Blue Teams. 2020 IEEE Secure Development (SecDev). :74–80.
Network port scans are a key first step toward developing a true understanding of a network-facing attack surface. However, in large-scale networks, the data resulting from such scans can be too voluminous for Red Teams to process for manual and semi-automatic testing. Indiscriminate port scans can also compromise a Red Team seeking to quickly gain a foothold on a network, and a large attack surface can even complicate Blue Team activities like threat hunting. In this paper we provide a cluster analysis methodology designed to group similar hosts, reducing security team workload and Red Team observability. We also measure the Internet-facing network attack surface of 13 organizations by clustering their hosts based on similarity. Through a case study we demonstrate how the output of our clustering technique provides new insight to both Red and Blue Teams, allowing them to quickly identify potential high-interest points on the attack surface.
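A minimal sketch of the idea of grouping similar hosts from port-scan data (not the authors' actual methodology; the hostnames, port sets, and the Jaccard similarity threshold are invented for illustration):

```python
def jaccard(a, b):
    """Similarity of two open-port sets."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def cluster_hosts(port_scans, threshold=0.5):
    """Greedy single-pass clustering: a host joins the first cluster whose
    representative it resembles, otherwise it starts a new cluster."""
    clusters = []  # list of (representative_ports, [hostnames])
    for host, ports in port_scans.items():
        for rep, members in clusters:
            if jaccard(rep, ports) >= threshold:
                members.append(host)
                break
        else:
            clusters.append((ports, [host]))
    return [members for _, members in clusters]

scans = {
    "web1": {22, 80, 443},
    "web2": {22, 80, 443, 8080},
    "mail": {25, 465, 587},
    "db1":  {22, 5432},
}
clusters = cluster_hosts(scans)
print(clusters)  # the two web servers collapse into one cluster
```

Each cluster can then be represented by a single host, shrinking the list a Red Team must probe or a Blue Team must hunt across.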
Liu, Xinlin, Huang, Jianhua, Luo, Weifeng, Chen, Qingming, Ye, Peishan, Wang, Dingbo.  2020.  Research on Attack Mechanism using Attack Surface. 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :137–141.
Because of the complexity of attack mechanisms, this work takes an attack surface approach to studying them. The attack mechanism of a mimic architecture is analyzed comparatively, using attack surface metrics to indicate whether mimic architectures are safer than non-mimic architectures. The attack surface of the architectures is defined in terms of the mimic brackets along three abstract dimensions, by reference to the system attack surface. The larger the attack surface, the more likely the architecture is to be attacked.
Lit, Yanyan, Kim, Sara, Sy, Eric.  2021.  A Survey on Amazon Alexa Attack Surfaces. 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC). :1–7.
Since its launch in 2014, Alexa, Amazon's versatile cloud-based voice service, has become active in over 100 million households worldwide [1]. Alexa's user-friendly, personalized vocal experience offers customers a more natural way of interacting with cutting-edge technology by allowing them to dictate commands directly to the assistant. The Alexa service is now more accessible than ever, available on hundreds of millions of devices from not only Amazon but also third-party device manufacturers. Unfortunately, that success has also been a source of concern and controversy: the success of Alexa rests on its effortless usability, which in turn has led to a lack of sufficient security. This paper surveys various attacks against the Amazon Alexa ecosystem, including attacks against the frontend voice capturing and the cloud backend voice command recognition and processing. Overall, we identify six attack surfaces covering the lifecycle of Alexa voice interaction, spanning voice data collection, transmission, processing, and storage. We also discuss potential mitigation solutions for each attack surface to improve Alexa and other voice assistants in terms of security and privacy.
Nie, Guanglai, Zhang, Zheng, Zhao, Yufeng.  2020.  The Executors Scheduling Algorithm for the Web Server Based on the Attack Surface. 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA). :281–287.
Among existing scheduling algorithms for mimic architectures, random scheduling cannot solve the problem of a large vulnerability window during scheduling. Algorithms that use diversity and complexity as scheduling indicators are built on known vulnerabilities: they fail to meet the endogenous-security requirements of mimic defense, cannot account for unknown vulnerabilities, and cannot measure the continuous differences among mimic Executive Entities over time. In this paper, we propose a mimic Executive Entity scheduling algorithm based on the attack surface of mimicry attacks. Its resource measurement and analysis method is intrinsically consistent with mimic security; it avoids both the random algorithm's blindness to vulnerabilities and the targeted modeling of known vulnerabilities; and it ensures the diversity of Executive Entities while keeping the attack surface of the mimic web server scheduling system small over continuous time and maintaining continuous differences. Experiments show that the proposed minimum-symbiotic-resource scheduling algorithm based on time continuity is more secure than random scheduling.
Zhang, Yaqin, Ma, Duohe, Sun, Xiaoyan, Chen, Kai, Liu, Feng.  2020.  WGT: Thwarting Web Attacks Through Web Gene Tree-based Moving Target Defense. 2020 IEEE International Conference on Web Services (ICWS). :364–371.
Moving target defense (MTD) suggests a game-changing way of enhancing web security by increasing uncertainty and complexity for attackers. A good number of web MTD techniques have been investigated to counter various types of web attacks. However, in most MTD techniques, only fixed attributes of the attack surface are shifted, leaving the rest exploitable by the attackers. Currently, there are few mechanisms to support the whole attack surface movement and solve the partial coverage problem, where only a fraction of the possible attributes shift in the whole attack surface. To address this issue, this paper proposes a Web Gene Tree (WGT) based MTD mechanism. The key point is to extract all potential exploitable key attributes related to vulnerabilities as web genes, and mutate them using various MTD techniques to withstand various attacks. Experimental results indicate that, by randomly shifting web genes and diversely inserting deceptive ones, the proposed WGT mechanism outperforms other existing schemes and can significantly improve the security of web applications.
Liu, Xinghua, Bai, Dandan, Jiang, Rui.  2020.  Load Frequency Control of Multi-area Power Systems under Deception Attacks*. 2020 Chinese Automation Congress (CAC). :3851–3856.
This paper investigates sliding mode load frequency control (LFC) for a multi-area power system (MPS) under deception attacks (DA). A Luenberger observer is designed to obtain the state estimate of the MPS. Using the Lyapunov-Krasovskii method, a sliding mode surface (SMS) is designed to ensure stability. A reachability analysis then ensures that the trajectory of the MPS can reach the specified SMS. Finally, the applicability of the method is illustrated through a case study.
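The observer idea can be illustrated on a scalar toy system (not the paper's multi-area model): a Luenberger observer adds a correction term L(y − x̂) to a copy of the plant dynamics, and for a stable gain the estimation error decays geometrically. All gains and values below are hypothetical.

```python
def simulate_observer(a=0.9, b=0.5, L=0.6, steps=50):
    """Discrete-time plant x[k+1] = a*x[k] + b*u[k], measured output y = x.
    Observer: xhat[k+1] = a*xhat[k] + b*u[k] + L*(y[k] - xhat[k]).
    The error obeys e[k+1] = (a - L)*e[k], so |a - L| < 1 gives convergence."""
    x, xhat = 5.0, 0.0  # plant starts far from the observer's initial guess
    errors = []
    for _ in range(steps):
        u = 0.1          # constant input, for illustration
        y = x
        errors.append(abs(x - xhat))
        x = a * x + b * u
        xhat = a * xhat + b * u + L * (y - xhat)
    return errors

errors = simulate_observer()
print(f"initial error {errors[0]:.2f}, final error {errors[-1]:.2e}")
```

Here a − L = 0.3, so the error shrinks by 70% per step regardless of the input, which is the property the paper's Lyapunov-Krasovskii design generalizes to the multi-area, attacked setting.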
Luo, Yukui, Gongye, Cheng, Ren, Shaolei, Fei, Yunsi, Xu, Xiaolin.  2020.  Stealthy-Shutdown: Practical Remote Power Attacks in Multi - Tenant FPGAs. 2020 IEEE 38th International Conference on Computer Design (ICCD). :545–552.
With the deployment of artificial intelligence (AI) algorithms in a large variety of applications, there is an increasing need for high-performance computing capabilities, and different hardware platforms have been utilized for acceleration. Among these hardware accelerators, field-programmable gate arrays (FPGAs) have gained much attention due to their re-programmability, which provides customized control logic and computing operators. For example, FPGAs have recently been adopted for on-demand cloud services by leading cloud providers like Amazon and Microsoft, providing acceleration for various compute-intensive tasks. While the co-residency of multiple tenants on a cloud FPGA chip increases the efficiency of resource utilization, it also creates unique, under-explored attack surfaces. In this paper, we exploit the vulnerability associated with the shared power distribution network on cloud FPGAs. We present a stealthy power attack that can be remotely launched by a malicious tenant, shutting down the entire chip and resulting in denial-of-service for other co-located benign tenants. Specifically, we propose stealthy-shutdown, a well-timed power attack implemented in two steps: (1) the attacker monitors the real-time FPGA power consumption via ring-oscillator-based voltage sensors, and (2) upon capturing a high power-consuming moment, i.e., when the power consumption of other tenants is above a certain threshold, injects a well-timed power load to shut down the FPGA system. Because the power load injected by the attacker accounts for only a small portion of the overall power consumption, the attack remains stealthy to the cloud FPGA operator. We successfully implement and validate the proposed attack on three FPGA evaluation kits running real-world applications. The resulting stealthy shutdown demonstrates severe security concerns of co-tenancy on cloud FPGAs. We also offer two countermeasures that can mitigate such power attacks.
Bradbury, Matthew, Maple, Carsten, Yuan, Hu, Atmaca, Ugur Ilker, Cannizzaro, Sara.  2020.  Identifying Attack Surfaces in the Evolving Space Industry Using Reference Architectures. 2020 IEEE Aerospace Conference. :1–20.
The space environment is currently undergoing substantial change, and many new entrants to the market are deploying devices, satellites and systems in space; this evolution has been termed NewSpace. The change is complicated by technological developments such as machine-learning-based autonomous space systems and the Internet of Space Things (IoST). In the IoST, space systems will rely on satellite-to-x communication and on interactions with wider aspects of the ground segment to a greater degree than existing systems. Such developments will inevitably change the cyber security threat landscape of space systems: there will be a greater number of attack vectors for adversaries to exploit, and previously infeasible threats can be realised and thus require mitigation. In this paper, we present a reference architecture (RA) that can be used to abstractly model in situ applications of this new space landscape. The RA specifies high-level system components and their interactions. By instantiating the RA for two scenarios we demonstrate how to analyse the attack surface using attack trees.
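Attack-tree analysis of the kind applied to the instantiated RA can be sketched as a recursive evaluation over AND/OR nodes (a generic illustration, not the paper's trees; the node names and costs are invented):

```python
def min_attack_cost(node):
    """Minimal attacker cost over an AND/OR attack tree.
    Leaves are ("leaf", name, cost); an OR node succeeds via its cheapest
    child, an AND node requires every child."""
    kind = node[0]
    if kind == "leaf":
        return node[2]
    costs = [min_attack_cost(child) for child in node[2]]
    return min(costs) if kind == "or" else sum(costs)

# Hypothetical tree for compromising a satellite uplink via the ground segment.
tree = ("or", "compromise uplink", [
    ("and", "spoof command", [
        ("leaf", "obtain signing keys", 8),
        ("leaf", "build transmitter", 3),
    ]),
    ("leaf", "exploit ground-station host", 6),
])
print(min_attack_cost(tree))  # the cheapest path is exploiting the host
```

Replacing cost with probability or attacker-skill ratings gives the other metrics typically rolled up over such trees.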
Niu, Yingjiao, Lei, Lingguang, Wang, Yuewu, Chang, Jiang, Jia, Shijie, Kou, Chunjing.  2020.  SASAK: Shrinking the Attack Surface for Android Kernel with Stricter “seccomp” Restrictions. 2020 16th International Conference on Mobility, Sensing and Networking (MSN). :387–394.
The increasing number of vulnerabilities in the Android kernel makes it an attractive target for attackers. Most kernel-targeted attacks are initiated through system calls. For security purposes, Google has included a Linux kernel security mechanism named “seccomp” since Android O to constrain the system calls accessible to Android apps. Unfortunately, the existing Android seccomp mechanism provides a fairly coarse-grained restriction by enforcing a unified seccomp policy containing more than 250 system calls for Android apps, which greatly reduces the effectiveness of seccomp. It also lacks an approach to profiling the unnecessary system calls for a given Android app. In this paper we present a two-level control scheme named SASAK, which can shrink the attack surface of the Android kernel by strictly constraining the system calls available to Android apps with the seccomp mechanism. First, instead of leveraging a unified seccomp policy for all Android apps, SASAK introduces architecture-dedicated system call constraining by enforcing two separate, refined seccomp policies for 32-bit and 64-bit Android apps, respectively. Second, we provide a tool to profile the necessary system calls for a given Android app and enforce an app-dedicated seccomp policy to further reduce the allowed system calls for apps selected by the users. The app-dedicated control can dynamically change the seccomp policy for an app according to its actual needs. We implement a prototype of SASAK, and the experimental results show that the architecture-dedicated constraining removes 39.6% of system calls for 64-bit apps and 42.5% for 32-bit apps. 33% of the removed system calls for 64-bit apps are vulnerable, and the figure for 32-bit apps is 18.8%. The app-dedicated restriction removes about 66.9% and 62.5% of system calls on average for 64-bit and 32-bit apps, respectively. In addition, SASAK introduces negligible performance overhead.
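The app-dedicated profiling step can be caricatured as a set computation (a toy stand-in: the real unified Android policy contains 250+ syscalls, the syscall names here are a tiny invented sample, and actual enforcement is done with seccomp-BPF filters in the kernel, not in Python):

```python
# Tiny stand-in for the unified Android seccomp allow-list.
UNIFIED_POLICY = {"read", "write", "openat", "close", "mmap", "execve",
                  "socket", "connect", "ptrace", "kexec_load"}

def profile_app(traced_syscalls):
    """App-dedicated policy: only the syscalls the app was observed to use
    (e.g. gathered by tracing the app under its normal workload)."""
    return set(traced_syscalls) & UNIFIED_POLICY

def removed_by_restriction(app_policy):
    """Syscalls the stricter app-dedicated filter would now deny."""
    return UNIFIED_POLICY - app_policy

observed = ["read", "write", "openat", "close", "mmap"]  # hypothetical trace
allowed = profile_app(observed)
denied = removed_by_restriction(allowed)
print(f"allowed {len(allowed)}, denied {len(denied)} of {len(UNIFIED_POLICY)}")
```

The denied set is exactly the attack-surface reduction the paper quantifies: every syscall removed is a kernel entry point the app can no longer reach.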
Plappert, Christian, Zelle, Daniel, Gadacz, Henry, Rieke, Roland, Scheuermann, Dirk, Krauß, Christoph.  2021.  Attack Surface Assessment for Cybersecurity Engineering in the Automotive Domain. 2021 29th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). :266–275.
Connected smart cars enable new attacks that may have serious consequences. Thus, the development of new cars must follow a cybersecurity engineering process as defined for example in ISO/SAE 21434. A central part of such a process is the threat and risk assessment including an attack feasibility rating. In this paper, we present an attack surface assessment with focus on the attack feasibility rating compliant to ISO/SAE 21434. We introduce a reference architecture with assets constituting the attack surface, the attack feasibility rating for these assets, and the application of this rating on typical use cases. The attack feasibility rating assigns attacks and assets to an evaluation of the attacker dimensions such as the required knowledge and the feasibility of attacks derived from it. Our application of sample use cases shows how this rating can be used to assess the feasibility of an entire attack path. The attack feasibility rating can be used as a building block in a threat and risk assessment according to ISO/SAE 21434.
S, Naveen, Puzis, Rami, Angappan, Kumaresan.  2020.  Deep Learning for Threat Actor Attribution from Threat Reports. 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP). :1–6.
Threat actor attribution is the task of identifying the attacker responsible for an attack, which often requires expert analysis and considerable time. There have been attempts to detect a threat actor using machine learning techniques that rely on information obtained from the analysis of malware samples. Such techniques can only identify the attack, and it is non-trivial to infer the attacker, because different attackers may adopt the same attack method. A state-of-the-art method performs attribution of threat actors from text reports using machine learning and NLP techniques over threat intelligence reports. We use the same set of threat reports on Advanced Persistent Threats (APTs). In this paper, we propose a deep learning architecture to attribute threat actors based on threat reports obtained from various threat intelligence sources. Our work uses neural networks to perform attribution and shows that our method is more accurate than other techniques and state-of-the-art methods.
Kumar, Sachin, Gupta, Garima, Prasad, Ranjitha, Chatterjee, Arnab, Vig, Lovekesh, Shroff, Gautam.  2020.  CAMTA: Causal Attention Model for Multi-touch Attribution. 2020 International Conference on Data Mining Workshops (ICDMW). :79–86.
Advertising channels have evolved from conventional print media, billboards and radio advertising to online digital advertising (ad), where users are exposed to a sequence of ad campaigns via social networks, display ads, search, etc. While advertisers revisit the design of ad campaigns to concurrently serve the requirements emerging from new ad channels, it is also critical for advertisers to estimate the contribution from touch-points (views, clicks, conversions) on different channels, based on the sequence of customer actions. This process of contribution measurement is often referred to as multi-touch attribution (MTA). In this work, we propose CAMTA, a novel deep recurrent neural network architecture which is a causal attribution mechanism for user-personalised MTA in the context of observational data. CAMTA minimizes the selection bias in channel assignment across time-steps and touch-points. Furthermore, it utilizes the users' pre-conversion actions in a principled way in order to predict per-channel attribution. To quantitatively benchmark the proposed MTA model, we employ the real-world Criteo dataset and demonstrate the superior prediction accuracy of CAMTA compared to several baselines. In addition, we provide results for budget allocation and user-behaviour modeling on the predicted channel attribution.
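For contrast with a learned causal model like CAMTA, the classical rule-based MTA baselines are simple to state (a generic sketch; the channel names and the journey are invented for illustration):

```python
def last_touch(journey):
    """All conversion credit goes to the final touch-point."""
    return {journey[-1]: 1.0}

def linear_touch(journey):
    """Equal credit to every touch-point on the converting path."""
    share = 1.0 / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# One user's converting journey across channels.
journey = ["display", "search", "social", "search"]
print(last_touch(journey))    # search takes everything
print(linear_touch(journey))  # search appears twice, so it takes half
```

Models like CAMTA replace these fixed rules with per-user, per-journey learned credit that accounts for selection bias in which channels a user was exposed to.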
Xu, Shawn, Venugopalan, Subhashini, Sundararajan, Mukund.  2020.  Attribution in Scale and Space. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :9677–9686.
We study the attribution problem for deep networks applied to perception tasks. For vision tasks, attribution techniques attribute the prediction of a network to the pixels of the input image. We propose a new technique called Blur Integrated Gradients (Blur IG). This technique has several advantages over other methods. First, it can tell at what scale a network recognizes an object: it produces scores in the scale/frequency dimension, which we find captures interesting phenomena. Second, it satisfies the scale-space axioms, which imply that its perturbations are free of artifacts; we therefore produce explanations that are cleaner and consistent with the operation of deep networks. Third, it eliminates the need for the baseline parameter of Integrated Gradients for perception tasks. This is desirable because the choice of baseline has a significant effect on the explanations. We compare the proposed technique against previous techniques and demonstrate its application on three tasks: ImageNet object recognition, Diabetic Retinopathy prediction, and AudioSet audio event identification. Code and examples are at https://github.com/PAIR-code/saliency.
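Blur IG builds on plain Integrated Gradients, which attributes f(x) − f(baseline) to input features by integrating gradients along a path from the baseline to the input; the sketch below (with an invented two-variable model) shows the completeness property such path methods satisfy. Blur IG's change, not shown here, is to replace the straight-line-to-baseline path with a sequence of increasingly blurred inputs, removing the baseline choice.

```python
def integrated_gradients(grad_f, x, baseline, steps=1000):
    """Riemann-sum approximation of Integrated Gradients:
    IG_i = (x_i - b_i) * integral over alpha of df/dx_i(b + alpha*(x - b))."""
    n = len(x)
    attributions = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint rule
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(n):
            attributions[i] += g[i] * (x[i] - baseline[i]) / steps
    return attributions

def f(p):       # toy model f(x1, x2) = x1 * x2
    return p[0] * p[1]

def grad_f(p):  # its analytic gradient
    return [p[1], p[0]]

x, baseline = [3.0, 2.0], [0.0, 0.0]
attr = integrated_gradients(grad_f, x, baseline)
# Completeness: attributions sum to f(x) - f(baseline) = 6.
print([round(a, 4) for a in attr], round(sum(attr), 4))
```

The sensitivity of the result to the choice of `baseline` is exactly the weakness the abstract says Blur IG removes.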
Kayes, A.S.M., Hammoudeh, Mohammad, Badsha, Shahriar, Watters, Paul A., Ng, Alex, Mohammed, Fatma, Islam, Mofakharul.  2020.  Responsibility Attribution Against Data Breaches. 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT). :498–503.
Electronic crimes like data breaches in healthcare systems are often fundamental failures of access control mechanisms. Most current access control systems do not provide an accessible way to engage users in decision-making about who should have access to what data and when. We advocate that a policy ontology can contribute to the development of an effective access control system by attributing responsibility for data breaches. We propose a responsibility attribution model as a theoretical construct and discuss its implications by introducing a cost model for data breach countermeasures. A policy ontology is then presented to realize the proposed responsibility and cost models. An experimental study of the performance of the proposed framework is conducted against a more generic access control framework. The practicality of the proposed solution is demonstrated through a case study from the healthcare domain.
Song, Jie, Chen, Yixin, Ye, Jingwen, Wang, Xinchao, Shen, Chengchao, Mao, Feng, Song, Mingli.  2020.  DEPARA: Deep Attribution Graph for Deep Knowledge Transferability. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :3921–3929.
Exploring the intrinsic interconnections between the knowledge encoded in PRe-trained Deep Neural Networks (PR-DNNs) of heterogeneous tasks sheds light on their mutual transferability, and consequently enables knowledge transfer from one task to another so as to reduce the training effort of the latter. In this paper, we propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs. In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regards to the outputs of the PR-DNN. Edges denote the relatedness between inputs and are measured by the similarity of their features extracted from the PR-DNN. The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs. We apply DEPARA to two important yet under-studied problems in transfer learning: pre-trained model selection and layer selection. Extensive experiments are conducted to demonstrate the effectiveness and superiority of the proposed method in solving both these problems. Code, data and models reproducing the results in this paper are available at https://github.com/zju-vipa/DEPARA.
Peck, Sarah Marie, Khan, Mohammad Maifi Hasan, Fahim, Md Abdullah Al, Coman, Emil N, Jensen, Theodore, Albayram, Yusuf.  2020.  Who Would Bob Blame? Factors in Blame Attribution in Cyberattacks Among the Non-Adopting Population in the Context of 2FA. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). :778–789.
This study focuses on identifying the factors contributing to a sense of personal responsibility that could improve understanding of insecure cybersecurity behavior and guide research toward more effective messaging targeting non-adopting populations. To that end, we ran a 2 (account type) x 2 (usage scenario) x 2 (message type) between-group study with 237 United States adult participants on Amazon MTurk, and investigated how the non-adopting population allocates blame, and under what circumstances they blame the end user, among the parties who hold responsibility: the software companies holding the data, the attackers exposing the data, and others. We find that users primarily hold service providers accountable for breaches, yet feel those same companies should not enforce stronger security policies on users. Results indicate that people do hold end users accountable for their behavior in the event of a breach, especially when the users' behavior affects others. Implications of our findings for risk communication are discussed in the paper.
Bansal, Naman, Agarwal, Chirag, Nguyen, Anh.  2020.  SAM: The Sensitivity of Attribution Methods to Hyperparameters. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :11–21.
Attribution methods can provide powerful insights into the reasons for a classifier's decision. We argue that a key desideratum of an explanation method is its robustness to input hyperparameters, which are often randomly set or empirically tuned. High sensitivity to arbitrary hyperparameter choices not only impedes reproducibility but also calls the correctness of an explanation into question and impairs the trust of end users. In this paper, we provide a thorough empirical study of the sensitivity of existing attribution methods. We find an alarming trend: many methods are highly sensitive to changes in their common hyperparameters, e.g. even changing a random seed can yield a different explanation! Interestingly, such sensitivity is not reflected in the average explanation accuracy scores over the dataset as commonly reported in the literature. In addition, explanations generated for robust classifiers (i.e., those trained to be invariant to pixel-wise perturbations) are surprisingly more robust than those generated for regular classifiers.
Fernandes, Steven, Raj, Sunny, Ewetz, Rickard, Pannu, Jodh Singh, Kumar Jha, Sumit, Ortiz, Eddy, Vintila, Iustina, Salter, Margaret.  2020.  Detecting Deepfake Videos using Attribution-Based Confidence Metric. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :1250–1259.
Recent advances in generative adversarial networks have made detecting fake videos a challenging task. In this paper, we propose the application of the state-of-the-art attribution-based confidence (ABC) metric to detecting deepfake videos. The ABC metric requires neither access to the training data nor training a calibration model on validation data; it can be used to draw inferences even when only the trained model is available. Here, we utilize the ABC metric to characterize whether a video is original or fake. The deep learning model is trained only on original videos, and the ABC metric uses the trained model to generate confidence values. For original videos, the confidence values are greater than 0.94.
Jaafar, Fehmi, Avellaneda, Florent, Alikacem, El-Hackemi.  2020.  Demystifying the Cyber Attribution: An Exploratory Study. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :35–40.
Current cyber attribution approaches propose using a variety of datasets and analytical techniques to distill information useful for identifying cyber attackers. In practice, practitioners and researchers in cyber attribution face several technical and regulatory challenges. In this paper, we describe the main challenges of cyber attribution and present the state of the art of the approaches used to address them. We then present an exploratory study that performs cyber attack attribution based on pattern recognition from real data. In our study, we use attack pattern discovery and identification based on real data collection and analysis.
Wu, Xiaohe, Calderon, Juan, Obeng, Morrison.  2021.  Attribution Based Approach for Adversarial Example Generation. SoutheastCon 2021. :1–6.
Neural networks with deep architectures have been used to construct state-of-the-art classifiers that can match human-level accuracy in areas such as image classification. However, many of these classifiers can be fooled by examples slightly modified from their original forms. In this work, we propose a novel approach for generating adversarial examples that makes use only of attribution information for the features and perturbs only features that are highly influential on the output of the classifier. We call this approach Attribution Based Adversarial Generation (ABAG). To demonstrate the effectiveness of this approach, three somewhat arbitrary algorithms are proposed and examined. In the first algorithm, all non-zero attributions are utilized and the associated features perturbed; in the second, only the top-n most positive and top-n most negative attributions are used and the corresponding features perturbed; and in the third, the level of perturbation is increased iteratively until an adversarial example is discovered. All three algorithms are implemented, and experiments are performed on the well-known MNIST dataset. The results show that adversarial examples can be generated very efficiently, proving the validity and efficacy of ABAG, i.e., utilizing attributions for the generation of adversarial examples. Furthermore, as shown by examples, ABAG can be adapted to provide a systematic search approach to generating adversarial examples by perturbing a minimal number of features.
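The idea of perturbing only the most influential features until the decision flips can be sketched on a linear classifier, where gradient-times-input attribution is simply w_i·x_i (an invented toy, not the paper's MNIST setup or its exact algorithms):

```python
def attributions_linear(weights, x):
    """For a linear score w.x, gradient*input attribution is w_i * x_i."""
    return [w * xi for w, xi in zip(weights, x)]

def abag_style_perturb(weights, x, n=2, step=0.5, max_iters=20):
    """Perturb only the top-n most influential features, pushing the score
    toward the opposite class, until the decision (sign of w.x) flips."""
    x = list(x)

    def score(v):
        return sum(w * vi for w, vi in zip(weights, v))

    target_down = score(x) > 0
    for _ in range(max_iters):
        attr = attributions_linear(weights, x)
        top = sorted(range(len(x)), key=lambda i: abs(attr[i]), reverse=True)[:n]
        for i in top:
            direction = -1.0 if target_down else 1.0
            x[i] += direction * step * (1.0 if weights[i] > 0 else -1.0)
        if (score(x) > 0) != target_down:
            return x
    return None  # no flip within the iteration budget

weights = [2.0, -1.0, 0.1]
x = [1.0, 0.5, 3.0]  # score = 2.0 - 0.5 + 0.3 = 1.8 (positive class)
adversarial = abag_style_perturb(weights, x)
print(adversarial)   # the low-attribution third feature is left untouched
```

The attack succeeds by moving only the two highest-attribution features, which is the "minimal number of features" property the abstract highlights.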
Hu, Xiaoyi, Wang, Ke.  2020.  Bank Financial Innovation and Computer Information Security Management Based on Artificial Intelligence. 2020 2nd International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI). :572—575.
In recent years, with the continuous development of new Internet technologies, big data, cloud computing, and related technologies have been widely used in work and life. Further growth in data scale and computing capability has driven breakthrough development of artificial intelligence technology. The generalization and popularization of financial technology not only affect traditional financial business but also place higher requirements on commercial banks operating fintech businesses. Artificial intelligence brings a fresh experience to financial services and helps increase customer stickiness. AI technology supports the standardization, modeling, and intelligence of banking business, and assists credit decision-making, risk early warning, and supervision. This paper first discusses the influence of artificial intelligence on financial innovation and, on this basis, puts forward measures for the innovation and development of bank fintech. Finally, it discusses computer information security management in bank financial innovation in the era of artificial intelligence.
Shu, Fei, Chen, Shuting, Li, Feng, Zhang, JianYe, Chen, Jia.  2020.  Research and implementation of network attack and defense countermeasure technology based on artificial intelligence technology. 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC). :475—478.
Using artificial intelligence technology to support network security has become a major trend, and major countries around the world have invested R&D effort in AI-based automated network attack and defense. The U.S. Navy, the U.S. Air Force, and the DoD Strategic Capabilities Office have invested heavily in the development of AI network defense systems. DARPA launched the Cyber Grand Challenge (CGC) to promote the development of AI-based automatic attack systems. In the 2016 DEF CON final, Mayhem, the CGC-champion automatic attack system, competed against 14 human teams and at one point defeated two of them, indicating that automatic attack methods generated by AI systems can scan for system defects and find vulnerabilities faster and more effectively than humans. Japan's Ministry of Defense also announced recently that, to strengthen its ability to respond to network attacks, it will introduce AI technology into the information and communication network defense system of the Japan Self-Defense Forces. It can be predicted that the deepening application of AI in network attack and defense may bring about revolutionary changes and increase the imbalance of strategic strength in cyberspace among countries. It is therefore of great significance to systematically survey the current state of AI-based network attack and defense at home and abroad, comprehensively analyze the development trends of the relevant technologies, examine the development outlines and specifications for AI attack and defense around the world, and distill the application status and future prospects of AI attack and defense, so as to promote the development of AI attack and defense technology in China and protect core interests in cyberspace.
Ho, Tsung-Yu, Chen, Wei-An, Huang, Chiung-Ying.  2020.  The Burden of Artificial Intelligence on Internal Security Detection. 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET). :148—150.
Our research team has devoted many years to extracting internal malicious behavior by monitoring network traffic. We applied deep learning to recognize malicious patterns within the network, but this methodology may create additional work in examining the results produced by the AI models. Hence, this paper addresses this scenario, considers the burden AI imposes, and proposes an idea for long-term reliable detection in future work.