Biblio

Found 139 results

Filters: Keyword is Predictive models
2021-10-12
Gouk, Henry, Hospedales, Timothy M..  2020.  Optimising Network Architectures for Provable Adversarial Robustness. 2020 Sensor Signal Processing for Defence Conference (SSPD). :1–5.
Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any Lp norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.
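As a concrete illustration of the kind of guarantee involved, the sketch below upper-bounds a feed-forward network's Lipschitz constant under an Lp norm as the product of per-layer induced operator norms, assuming 1-Lipschitz activations. This is the standard construction such defences build on, not the paper's tighter architecture-specific bound.

```python
# Minimal sketch (standard construction, not the paper's exact bound):
# Lipschitz upper bound of a feed-forward network under an Lp norm as the
# product of per-layer induced operator norms. Assumes 1-Lipschitz activations.
import numpy as np

def induced_norm(W: np.ndarray, p: float) -> float:
    """Induced Lp -> Lp operator norm for common choices of p."""
    if p == 1:
        return np.abs(W).sum(axis=0).max()            # max absolute column sum
    if p == np.inf:
        return np.abs(W).sum(axis=1).max()            # max absolute row sum
    if p == 2:
        return np.linalg.svd(W, compute_uv=False)[0]  # spectral norm
    raise ValueError("this sketch handles p in {1, 2, inf}")

def lipschitz_upper_bound(weights, p=np.inf):
    bound = 1.0
    for W in weights:
        bound *= induced_norm(W, p)
    return bound

# Toy 3-layer network with random weights (shapes are (out, in))
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 32)),
          rng.standard_normal((32, 64)),
          rng.standard_normal((10, 32))]
print(lipschitz_upper_bound(layers, p=np.inf))
```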
Radhakrishnan, C., Karthick, K., Asokan, R..  2020.  Ensemble Learning Based Network Anomaly Detection Using Clustered Generalization of the Features. 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). :157–162.
Due to the extraordinary volume of business information, sophisticated cyber-attacks targeting enterprise networks have become more common, with intruders trying to penetrate deeper into, and harvest more from, compromised network machines. A vital security requirement is that field experts and network administrators share a common terminology for describing intrusion attempts, so they can rapidly assist each other in responding to all kinds of threats. Given the enormous volume of network traffic, traditional Machine Learning (ML) algorithms provide ineffective predictions of network anomalies. A hybridized multi-model system can therefore improve the accuracy of detecting intrusions in networks. Accordingly, this article presents a novel approach, the Clustered Generalization oriented Ensemble Learning Model (CGELM), for predicting network anomalies. The performance metrics of the proposed approach are Detection Rate (DR) and False Predictive Rate (FPR) on two heterogeneous data sets, namely NSL-KDD and UGR'16. The proposed method achieves a 98.93% detection rate and a 0.14% FPR against the Decision Stump AdaBoost and Stacking Ensemble baselines.
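A minimal sketch of a clustering-plus-stacking ensemble in the spirit of (but not identical to) CGELM is shown below. The synthetic features standing in for NSL-KDD data, and the use of k-means cluster IDs as an extra generalized feature, are illustrative assumptions.

```python
# Hedged sketch of clustered generalization + stacking for anomaly detection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              AdaBoostClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.standard_normal((1000, 20))        # stand-in for NSL-KDD features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in anomaly labels

# "Clustered generalization": append each sample's cluster id as a feature.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
X = np.hstack([X, clusters[:, None]])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("ada", AdaBoostClassifier())],
    final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
print("detection accuracy:", stack.score(X_te, y_te))
```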
Zhao, Haojun, Lin, Yun, Gao, Song, Yu, Shui.  2020.  Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–5.
The discovery of adversarial examples poses a serious risk to deep neural networks (DNNs). By adding a subtle perturbation that is imperceptible to the human eye, a well-behaved DNN model can be easily fooled into completely changing the prediction categories of the input samples. However, research on adversarial attacks in the field of modulation recognition has mainly focused on increasing the prediction error of the classifier, while ignoring the perceptual invisibility of the attack. Aiming at the task of DNN-based modulation recognition, this study designs the Fitting Difference as a metric to measure the perturbed waveforms and proposes a new method, the Nesterov Adam Iterative Method, to generate adversarial examples. We show that the proposed algorithm not only performs excellent white-box attacks but can also initiate attacks on a black-box model. Moreover, our method improves the perceptual invisibility of the attack waveforms to a certain degree, thereby reducing the risk of an attack being detected.
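The sketch below illustrates an iterative attack that combines Nesterov lookahead with Adam-style moment estimates, in the spirit of the paper's Nesterov Adam Iterative Method; the exact update rule, step size, and epsilon budget are assumptions for illustration.

```python
# Hedged sketch: iterative adversarial attack combining Nesterov lookahead
# with Adam moment estimates (the paper's precise update may differ).
import torch

def nesterov_adam_attack(model, x, y, eps=0.03, steps=10, lr=0.01,
                         beta1=0.9, beta2=0.999, delta=1e-8):
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)
    v = torch.zeros_like(x)
    loss_fn = torch.nn.CrossEntropyLoss()
    for t in range(1, steps + 1):
        # Nesterov lookahead: evaluate the gradient at the anticipated point.
        x_look = (x_adv + lr * beta1 * m).detach().requires_grad_(True)
        loss = loss_fn(model(x_look), y)
        grad, = torch.autograd.grad(loss, x_look)
        # Adam moment estimates with bias correction.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        x_adv = x_adv + lr * m_hat / (v_hat.sqrt() + delta)
        # Keep the perturbation within an epsilon-ball of the original input.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).detach()
    return x_adv
```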
Zhong, Zhenyu, Hu, Zhisheng, Chen, Xiaowei.  2020.  Quantifying DNN Model Robustness to the Real-World Threats. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :150–157.
DNN models have suffered from adversarial example attacks, which lead to inconsistent prediction results. As opposed to gradient-based attacks, which assume white-box access to the model by the attacker, we focus on more realistic input perturbations from the real world and their actual impact on model robustness without any attacker present. In this work, we promote a standardized framework for quantifying robustness against real-world threats. It is composed of a set of safety properties associated with common violations, a group of metrics measuring the minimal perturbation that causes a violation, and various criteria that reflect different aspects of model robustness. By revealing comparison results obtained through this framework for 13 pre-trained ImageNet classifiers, three state-of-the-art object detectors, and three cloud-based content moderators, we deliver the status quo of real-world model robustness. Beyond that, we provide robustness benchmarking datasets for the community.
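One of the framework's core measurements, the minimal perturbation severity that causes a violation, can be found by bisection over a perturbation family. The sketch below is an illustrative stand-in: the Gaussian-noise family, the search interval, and the tolerance are all assumptions.

```python
# Illustrative sketch: find the minimal severity of a real-world perturbation
# (here, additive Gaussian noise) that flips a classifier's label, by bisection.
import numpy as np

def minimal_noise_level(predict, x, lo=0.0, hi=1.0, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(x.shape)
    original = predict(x)
    if predict(x + hi * noise) == original:
        return None  # even the strongest perturbation leaves the label intact
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if predict(x + mid * noise) == original:
            lo = mid   # label unchanged: need a stronger perturbation
        else:
            hi = mid   # label flipped: try a weaker one
    return hi          # approximate minimal violating severity
```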
Deng, Perry, Linsky, Cooper, Wright, Matthew.  2020.  Weaponizing Unicodes with Deep Learning - Identifying Homoglyphs with Weakly Labeled Data. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.
Visually similar characters, or homoglyphs, can be used to perform social engineering attacks or to evade spam and plagiarism detectors. It is thus important to understand the capabilities of an attacker to identify homoglyphs - particularly ones that have not been previously spotted - and leverage them in attacks. We investigate a deep-learning model using embedding learning, transfer learning, and augmentation to determine the visual similarity of characters and thereby identify potential homoglyphs. Our approach uniquely takes advantage of weak labels that arise from the fact that most characters are not homoglyphs. Our model drastically outperforms the Normalized Compression Distance approach on pairwise homoglyph identification, for which we achieve an average precision of 0.97. We also present the first attempt at clustering homoglyphs into sets of equivalence classes, which is more efficient than pairwise information for security practitioners to quickly look up homoglyphs or to normalize confusable string encodings. To measure clustering performance, we propose a metric (mBIOU) building on the classic Intersection-Over-Union (IOU) metric. Our clustering method achieves 0.592 mBIOU, compared to 0.430 for the naive baseline. We also use our model to predict over 8,000 previously unknown homoglyphs, and find good early indications that many of these may be true positives. Source code and the list of predicted homoglyphs are uploaded to GitHub: https://github.com/PerryXDeng/weaponizing_unicode.
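The mBIOU metric can be read as: for each ground-truth homoglyph set, take the best IOU against any predicted set, then average. A small sketch of that reading follows; the exact matching scheme in the paper may differ, and the toy character sets are made up.

```python
# Sketch of the mBIOU clustering metric: mean best Intersection-over-Union
# between ground-truth homoglyph sets and predicted sets.
def iou(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def mbiou(predicted: list[set], ground_truth: list[set]) -> float:
    return sum(max(iou(gt, pred) for pred in predicted)
               for gt in ground_truth) / len(ground_truth)

# Toy example with confusable characters (second "О" is Cyrillic)
truth = [{"O", "0", "О"}, {"l", "1", "I"}]
pred  = [{"O", "0"}, {"l", "1", "I", "|"}]
print(mbiou(pred, truth))  # (2/3 + 3/4) / 2 = 0.7083...
```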
Chen, Jianbo, Jordan, Michael I., Wainwright, Martin J..  2020.  HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. 2020 IEEE Symposium on Security and Privacy (SP). :1277–1294.
The goal of a decision-based adversarial attack on a trained model is to generate adversarial examples based solely on observing output labels returned by the targeted model. We develop HopSkipJumpAttack, a family of algorithms based on a novel estimate of the gradient direction using binary information at the decision boundary. The proposed family includes both untargeted and targeted attacks optimized for the L2 and L∞ similarity metrics respectively. Theoretical analysis is provided for the proposed algorithms and the gradient direction estimate. Experiments show HopSkipJumpAttack requires significantly fewer model queries than several state-of-the-art decision-based adversarial attacks. It also achieves competitive performance in attacking several widely-used defense mechanisms.
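The core idea, estimating the gradient direction at the decision boundary from binary feedback alone, can be sketched as a Monte Carlo estimator. The version below follows the paper's estimator in spirit (including the variance-reducing baseline), but constants and details are simplified.

```python
# Simplified sketch of HopSkipJumpAttack's gradient-direction estimate
# from binary (adversarial / not adversarial) feedback only.
import numpy as np

def estimate_gradient_direction(phi, x, delta=0.1, n_samples=100, seed=0):
    """phi(x') returns +1 if x' is adversarial, -1 otherwise."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_samples,) + x.shape)
    # Normalize each random direction to unit length.
    norms = np.linalg.norm(u.reshape(n_samples, -1), axis=1)
    u /= norms.reshape((n_samples,) + (1,) * x.ndim)
    # Query the model near the boundary point x along each direction.
    signs = np.array([phi(x + delta * ub) for ub in u], dtype=float)
    signs -= signs.mean()                  # baseline to reduce variance
    v = (signs.reshape((n_samples,) + (1,) * x.ndim) * u).mean(axis=0)
    return v / np.linalg.norm(v)           # estimated gradient direction
```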
Sultana, Kazi Zakia, Codabux, Zadia, Williams, Byron.  2020.  Examining the Relationship of Code and Architectural Smells with Software Vulnerabilities. 2020 27th Asia-Pacific Software Engineering Conference (APSEC). :31–40.
Context: Security is vital to software developed for commercial or personal use. Although more organizations are realizing the importance of applying secure coding practices, in many of them security concerns are not known or addressed until a security failure occurs. The root cause of security failures is vulnerable code. While metrics have been used to predict software vulnerabilities, we explore the relationship of code and architectural smells with security weaknesses. As smells are surface indicators of deeper problems in software, determining the relationship between smells and software vulnerabilities can play a significant role in vulnerability prediction models. Objective: This study explores the relationship between smells and software vulnerabilities to identify the smells associated with vulnerable code. Method: We extracted the class-, method-, file-, and package-level smells for three systems: Apache Tomcat, Apache CXF, and Android. We then compared their occurrences in the vulnerable classes, which were reported to contain vulnerable code, and in the neutral classes (non-vulnerable classes where no vulnerability had yet been reported). Results: We found that a vulnerable class is more likely to have certain smells than a non-vulnerable class. God Class, Complex Class, Large Class, Data Class, Feature Envy, and Brain Class have a statistically significant relationship with software vulnerabilities. We found no significant relationship between architectural smells and software vulnerabilities. Conclusion: We conclude that for all the systems examined, there is a statistically significant correlation between software vulnerabilities and some smells.
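The kind of association test behind such results can be sketched as a chi-squared test on a 2x2 contingency table of (has smell) x (vulnerable class). The counts below are made up purely for illustration, not taken from the paper.

```python
# Hedged sketch: chi-squared test of association between one smell
# (e.g., God Class) and vulnerability status. Counts are illustrative.
from scipy.stats import chi2_contingency

#                 vulnerable  neutral
table = [[40, 60],    # classes with the smell
         [15, 185]]   # classes without the smell
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")  # small p suggests an association
```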
Ivaki, Naghmeh, Antunes, Nuno.  2020.  SIDE: Security-Aware Integrated Development Environment. 2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). :149–150.
An effective way to build secure software is to embed security into the software in the early stages of development. Thus, we aim to study several types of evidence of code anomalies introduced during the software development phase that may be indicators of security issues, such as code smells, structural complexity represented by diverse software metrics, issues detected by static code analysers, and missing security best practices. To use such evidence for vulnerability prediction and removal, we first need to understand how it correlates with security issues. Then, we need to discover how these imperfect raw data can be integrated to reach a reliable, accurate and valuable decision about a portion of code. Finally, we need to construct a security actuator that provides suggestions to developers for removing or fixing the detected issues in the code. All of these will lead to the construction of a framework, including security monitoring, security analyzer, and security actuator platforms, that is necessary for a security-aware integrated development environment (SIDE).
Franchina, L., Socal, A..  2020.  Innovative Predictive Model for Smart City Security Risk Assessment. 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO). :1831–1836.
In a Smart City, new technologies such as big data analytics, data fusion and artificial intelligence will increase awareness by measuring many phenomena and storing a huge amount of data. 5G will allow these data to be communicated among different infrastructures instantaneously. In a Smart City, security aspects are going to be a major concern. Some drawbacks, such as the vulnerabilities of a highly integrated system and information overload, must be considered. To overcome these downsides, an innovative predictive model for Smart City security risk assessment has been developed. Risk metrics and indicators are defined by considering data coming from a wide range of sensors. An innovative "what if" algorithm is introduced to identify the functional relationships among critical infrastructures. It is therefore possible to evaluate the effects of an incident involving one infrastructure on the others.
2021-10-04
Alsoghyer, Samah, Almomani, Iman.  2020.  On the Effectiveness of Application Permissions for Android Ransomware Detection. 2020 6th Conference on Data Science and Machine Learning Applications (CDMA). :94–99.
Ransomware attacks pose a serious threat to Android devices and their stored data, which can be locked and/or encrypted by such attacks. Existing solutions attempt to detect and prevent these attacks by studying different features and applying various analysis mechanisms, including static analysis, dynamic analysis, or both. In this paper, recent ransomware detection solutions are investigated and compared. Moreover, a deep analysis of Android permissions was conducted to identify significant permissions that can discriminate ransomware with high accuracy before it harms users' devices. Based on the outcome of this analysis, a permissions-based ransomware detection system is proposed. Different classifiers were tested to build the prediction model of this detection system. Evaluation of the ransomware detection service revealed a high detection rate of 96.9%. Additionally, the new permission-based Android dataset constructed in this research will be made available to researchers and developers for future work.
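A minimal sketch of the permissions-based detection shape follows: binary permission-request vectors per app, fed to a classifier. The permission list, labels, and toy apps are illustrative assumptions, not the paper's dataset or feature selection.

```python
# Hedged sketch: permission vectors -> Random Forest ransomware detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

PERMISSIONS = ["READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE",
               "RECEIVE_BOOT_COMPLETED", "SYSTEM_ALERT_WINDOW", "INTERNET"]

def to_vector(requested: set) -> list:
    # One binary feature per permission: requested or not.
    return [int(p in requested) for p in PERMISSIONS]

# Toy corpus: benign app (label 0) and ransomware-like app (label 1).
apps = [({"INTERNET"}, 0),
        ({"READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE",
          "RECEIVE_BOOT_COMPLETED", "SYSTEM_ALERT_WINDOW"}, 1)] * 50
X = np.array([to_vector(perms) for perms, _ in apps])
y = np.array([label for _, label in apps])
print(cross_val_score(RandomForestClassifier(), X, y, cv=5).mean())
```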
2021-09-30
Peng, Cheng, Wang, Yongli, Yao, Boyi, Huang, Yuanyuan, Lu, Jiazhong, Peng, Qiao.  2020.  Cyber Security Situational Awareness Jointly Utilizing Ball K-Means and RBF Neural Networks. 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP). :261–265.
Cyber security situational awareness suffers from low prediction accuracy and slow prediction speed. This paper proposes a network security situational awareness model based on an accelerated, accurate k-means radial basis function (RBF) neural network. The model uses the ball k-means clustering algorithm to cluster the input samples and obtain the nodes of the hidden layer of the RBF neural network, speeding up the selection of the initial RBF center points and optimizing the structural parameters of the RBF neural network. Finally, the training data set is used to train the neural network and the test data set to measure its accuracy. The results show that this method offers a substantial improvement in training speed and accuracy over other neural networks.
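The k-means-seeded RBF network idea can be sketched briefly: cluster centers become the RBF hidden units, and the output layer is solved by least squares. Standard KMeans stands in here for the paper's accelerated ball k-means variant, and the Gaussian width gamma is an assumed hyperparameter.

```python
# Sketch: k-means centers as RBF hidden units, linear output by least squares.
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, y, n_hidden=10, gamma=1.0):
    centers = KMeans(n_clusters=n_hidden, n_init=10,
                     random_state=0).fit(X).cluster_centers_
    # Hidden-layer activations: Gaussian RBF around each cluster center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = np.exp(-gamma * d2)
    W, *_ = np.linalg.lstsq(H, y, rcond=None)  # output-layer weights
    return centers, W

def predict_rbf(X, centers, W, gamma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ W
```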
2021-09-21
bin Asad, Ashub, Mansur, Raiyan, Zawad, Safir, Evan, Nahian, Hossain, Muhammad Iqbal.  2020.  Analysis of Malware Prediction Based on Infection Rate Using Machine Learning Techniques. 2020 IEEE Region 10 Symposium (TENSYMP). :706–709.
In this modern, technological age, the internet has been adopted by the masses, and with it, the danger of malicious attacks by cybercriminals has increased. These attacks are carried out via malware and have resulted in billions of dollars of financial damage. This makes the prevention of malicious attacks an essential part of the battle against cybercrime. In this paper, we apply machine learning algorithms to predict the malware infection rates of computers based on their features. We use supervised machine learning algorithms and gradient boosting algorithms. We collected a publicly available dataset, which was divided into two parts: a training set and a testing set. After conducting four different experiments using the aforementioned algorithms, we found that LightGBM is the best model, with an AUC score of 0.73926.
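The experiment shape can be sketched as follows: train a gradient-boosted classifier on per-machine feature vectors and report AUC. The synthetic data below stands in for the public malware-infection dataset; no hyperparameters from the paper are implied.

```python
# Hedged sketch: LightGBM classifier + AUC on synthetic machine features.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 30))                     # machine features
y = (X[:, :3].sum(axis=1) + rng.normal(0, 1, 5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMClassifier(n_estimators=200, learning_rate=0.05).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```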
Wu, Qiang, Zhang, Jiliang.  2020.  CT PUF: Configurable Tristate PUF against Machine Learning Attacks. 2020 IEEE International Symposium on Circuits and Systems (ISCAS). :1–5.
Strong physical unclonable functions (PUFs) are promising lightweight hardware security primitives for device authentication, but they are vulnerable to machine learning attacks. This paper demonstrates that even a recently proposed dual-mode PUF can still be broken. To improve security, this paper proposes a highly flexible, machine-learning-resistant configurable tristate (CT) PUF, which uses the response generated in the working state of an Arbiter PUF to XOR the challenge input and response output of the other two working states (a ring oscillator (RO) PUF and a bistable ring (BR) PUF). The proposed CT PUF is implemented on Xilinx Artix-7 FPGAs, and the experimental results show that the modeling accuracy of logistic regression and artificial neural networks is reduced to the mid-50% range.
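For context, the modeling attack that CT PUF defends against can be sketched directly: a plain Arbiter PUF follows an approximately linear additive delay model, so logistic regression learns it from challenge-response pairs. The simulation below uses the standard parity feature transform; all parameters are toy values.

```python
# Sketch of the ML attack motivating CT PUF: logistic regression models
# a simulated Arbiter PUF from challenge-response pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20000
w = rng.standard_normal(n_stages + 1)                  # secret stage delays

C = rng.integers(0, 2, size=(n_crps, n_stages))        # random challenges
# Parity feature transform used in standard Arbiter PUF delay models.
phi = np.cumprod(1 - 2 * C[:, ::-1], axis=1)[:, ::-1]
phi = np.hstack([phi, np.ones((n_crps, 1))])
r = (phi @ w > 0).astype(int)                          # simulated responses

X_tr, X_te, y_tr, y_te = train_test_split(phi, r, test_size=0.25,
                                          random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("modeling accuracy:", clf.score(X_te, y_te))     # near 1.0 for plain APUF
```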
Zhao, Quanling, Sun, Jiawei, Ren, Hongjia, Sun, Guodong.  2020.  Machine-Learning Based TCP Security Action Prediction. 2020 5th International Conference on Mechanical, Control and Computer Engineering (ICMCCE). :1329–1333.
With the continuous growth of Internet technology and the increasingly broad applications of the Internet, network security incidents and cyber-attacks are showing a growing trend. Consequently, computer network security is becoming increasingly important. A TCP firewall is a computer network security system that allows or denies the transmission of data according to specific rules, providing security for the computer network. Traditional firewalls rely on network administrators to set security rules, and administrators sometimes need to decide whether to allow or deny packets to keep computer networks secure. However, due to the huge amount of data on the Internet, this places a huge burden on network administrators, so it is particularly important to address the problem with machine learning methods. This study aims to predict TCP security actions based on the TCP transmission characteristics dataset provided by the UCI Machine Learning Repository, implementing machine learning models such as neural networks, support vector machines (SVM), AdaBoost, and logistic regression. The process includes evaluating the various models and performing interpretability analysis. By utilizing the idea of ensemble learning, the final result has an accuracy score of over 98%.
2021-08-17
Zheng, Gang, Xu, Xinzhong, Wang, Chao.  2020.  An Effective Target Address Generation Method for IPv6 Address Scan. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :73–77.
In recent years, IPv6 and its applications have been more and more widely deployed, and most network devices support and enable the IPv6 protocol stack, so the security of IPv6 networks is a growing concern. Within IPv6 network security technology, address scanning is a key and difficult problem. This paper presents a target generation algorithm (TGA) based IPv6 address scanning method. It takes known live IPv6 addresses as input and then uses information entropy and clustering techniques to mine the distribution patterns of these seed addresses. The final optimized target address set is then obtained by expanding from the seed address set according to the mined distribution patterns. Experimental results show that the method can effectively improve the efficiency of IPv6 address scanning.
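The entropy step of such TGA-style methods can be sketched concisely: compute per-nibble Shannon entropy over the seed set to find which address positions actually vary, since those positions are where candidate targets are worth enumerating. The seed addresses below are toy values.

```python
# Sketch: per-nibble entropy over fully expanded IPv6 seed addresses.
import math
from collections import Counter

seeds = ["20010db8000000000000000000000001",
         "20010db8000000000000000000000005",
         "20010db80000000000000000000000a1"]   # 32-nibble expanded addresses

def nibble_entropy(addresses, pos):
    counts = Counter(a[pos] for a in addresses)
    n = len(addresses)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

entropies = [nibble_entropy(seeds, i) for i in range(32)]
variable_positions = [i for i, h in enumerate(entropies) if h > 0]
print(variable_positions)   # positions worth enumerating, here [30, 31]
```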
Alenezi, Freeh, Tsokos, Chris P..  2020.  Machine Learning Approach to Predict Computer Operating Systems Vulnerabilities. 2020 3rd International Conference on Computer Applications & Information Security (ICCAIS). :1–6.
Information security is everyone's concern. Computer systems are used to store sensitive data, and any weakness in their reliability and security makes them vulnerable. The Common Vulnerability Scoring System (CVSS) is a commonly used scoring system that helps in assessing the severity of a software vulnerability. In this research, we show the effectiveness of common machine learning algorithms in predicting computer operating system security using the published vulnerability data in the Common Vulnerabilities and Exposures and National Vulnerability Database repositories. The Random Forest algorithm has the best performance, compared to the other algorithms, in predicting computer operating system vulnerability severity levels based on precision, recall, and F-measure evaluation metrics. In addition, a predictive model was developed to predict whether a newly discovered computer operating system vulnerability would allow attackers to cause denial of service to the subject system.
2021-08-05
Alecakir, Huseyin, Kabukcu, Muhammet, Can, Burcu, Sen, Sevil.  2020.  Discovering Inconsistencies between Requested Permissions and Application Metadata by using Deep Learning. 2020 International Conference on Information Security and Cryptology (ISCTURKEY). :56–56.
Android gives us the opportunity to extract meaningful information from metadata. From a security point of view, missing important information in an application's metadata can be a sign of a suspicious application, which can then be directed for extensive analysis. In particular, the usage of dangerous permissions is expected to be explained in app descriptions. The permission-to-description fidelity problem in the literature aims to discover such inconsistencies between the usage of permissions and the descriptions. This study proposes a new method based on natural language processing and recurrent neural networks. The effect of user reviews on finding such inconsistencies is also investigated in addition to application descriptions. The experimental results show that the proposed solution obtains high precision and could be used for triage of Android applications.
2021-08-02
Peng, Ye, Fu, Guobin, Luo, Yingguang, Yu, Qi, Li, Bin, Hu, Jia.  2020.  A Two-Layer Moving Target Defense for Image Classification in Adversarial Environment. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :410–414.
Deep learning plays an increasingly important role in various fields due to its superior performance, and it has achieved advanced recognition performance in image classification. However, the vulnerability of deep learning in adversarial environments cannot be ignored: the model's prediction is likely to be affected by small perturbations added to samples by an adversary. In this paper, we propose a two-layer dynamic defense method based on a pool of defensive techniques and a pool of retrained branch models. First, we randomly select defense methods from the defense pool to preprocess the input. The perturbation ability of adversarial samples changes after preprocessing by different defense methods, producing different classification results. In addition, we conduct adversarial training based on the original model and dynamically generate multiple branch models. The classification results of these branch models for the same adversarial sample are inconsistent, so we can detect adversarial samples by using the inconsistencies in the output results of the two layers. The experimental results show that the proposed two-layer dynamic defense method achieves a good defense effect.
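A hedged sketch of the two-layer idea follows: (1) apply a randomly chosen input transformation from a defense pool, and (2) query several retrained branch models, flagging the input as adversarial if their predictions disagree. The specific transformations and the disagreement rule are assumptions for illustration.

```python
# Sketch: moving-target input transformation + branch-model disagreement.
import random
import numpy as np

def jpeg_like_quantize(x): return np.round(x * 16) / 16
def add_noise(x):          return x + np.random.normal(0, 0.01, x.shape)
def identity(x):           return x

DEFENSE_POOL = [jpeg_like_quantize, add_noise, identity]

def detect_adversarial(x, branch_models):
    defense = random.choice(DEFENSE_POOL)          # layer 1: moving target
    x_def = defense(x)
    preds = [m(x_def) for m in branch_models]      # layer 2: branch models
    return len(set(preds)) > 1                     # disagreement => adversarial
```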
2021-06-30
Asyaev, G. D., Antyasov, I. S..  2020.  Model for Providing Information Security of APCS Based on Predictive Maintenance Technology. 2020 Global Smart Industry Conference (GloSIC). :287–290.
This article considers the basic quality-of-operation criteria for automated process control systems (APCS) and analyses their critical points and level of information security. A model for providing APCS information security based on predictive maintenance technology, using intelligent data-processing methods, is proposed. Upon detection of new kinds of threats involving destructive influences on the facility, the model generates a list of actions based on the acceptability of the predicted consequences for APCS operation. Using systems analysis, a complex model of the automated technical facility is developed that allows the consequences of realized information security threats to be estimated at the various system levels of the APCS.
2021-06-28
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Mehmet Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks in deep learning have emerged as one of the dominant security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize adversarial examples in deep learning by studying their adverse effects and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose a strategic input transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on the adversarial input. We show that the strategic input transformation teaming defense achieves high defense success rates and is more robust, with high attack-prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
2021-06-01
Wang, Qi, Zhao, Weiliang, Yang, Jian, Wu, Jia, Zhou, Chuan, Xing, Qianli.  2020.  AtNE-Trust: Attributed Trust Network Embedding for Trust Prediction in Online Social Networks. 2020 IEEE International Conference on Data Mining (ICDM). :601–610.
Trust relationship prediction among people provides valuable support for decision making, information dissemination, and product promotion in online social networks. Network embedding has achieved promising performance for link prediction by learning node representations that encode intrinsic network structures. However, most existing network embedding solutions cannot effectively capture the properties of a trust network, which has directed edges and nodes with in/out links. Furthermore, trust networks usually contain rich user attributes, such as ratings, reviews, and the rated/reviewed items, which may exert significant impacts on the formation of trust relationships. A network embedding-based method that can adequately integrate these properties for trust prediction is still lacking. In this work, we develop the AtNE-Trust model to address these issues. We first capture user embedding from both the trust network structure and user attributes. Then we design a deep multi-view representation learning module to further mine and fuse the obtained user embedding. Finally, a trust evaluation module is developed to predict the trust relationships between users. Representation learning and trust evaluation are optimized together to capture high-quality user embedding and make accurate predictions simultaneously. A set of experiments against real-world datasets demonstrates the effectiveness of the proposed approach.
2021-05-25
Cai, Feiyang, Li, Jiani, Koutsoukos, Xenofon.  2020.  Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression. 2020 IEEE Security and Privacy Workshops (SPW). :208–214.
Learning-enabled components (LECs) are widely used in cyber-physical systems (CPS) since they can handle the uncertainty and variability of the environment and increase the level of autonomy. However, it has been shown that LECs such as deep neural networks (DNNs) are not robust, and adversarial examples can cause a model to make false predictions. This paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS. The proposed approach is based on inductive conformal prediction and uses a regression model based on a variational autoencoder. The architecture allows both the input and the neural network prediction to be taken into consideration when detecting adversarial, and more generally out-of-distribution, examples. We demonstrate the method using an advanced emergency braking system implemented in an open-source simulator for self-driving cars, where a DNN is used to estimate the distance to an obstacle. The simulation results show that the method can effectively detect adversarial examples with a short detection delay.
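The inductive conformal detection step can be sketched as follows: compute a nonconformity score for each test input, convert it to a p-value against a held-out calibration set, and flag small p-values. The score function below (VAE reconstruction error plus regressor disagreement) is a simplified stand-in for the paper's architecture, and the threshold is an assumption.

```python
# Hedged sketch: inductive conformal anomaly detection for a regression LEC.
import numpy as np

def p_value(score, calibration_scores):
    # Fraction of calibration scores at least as nonconforming as this one.
    return (np.sum(calibration_scores >= score) + 1) / (len(calibration_scores) + 1)

def detect(x, vae_reconstruct, regressor, regressor_on_latent,
           calib_scores, eps=0.05):
    recon_err = np.linalg.norm(x - vae_reconstruct(x))
    disagreement = abs(regressor(x) - regressor_on_latent(x))
    score = recon_err + disagreement           # nonconformity measure (assumed)
    return p_value(score, calib_scores) < eps  # True => likely adversarial/OOD
```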
2021-05-18
Niloy, Nishat Tasnim, Islam, Md. Shariful.  2020.  IntellCache: An Intelligent Web Caching Scheme for Multimedia Contents. 2020 Joint 9th International Conference on Informatics, Electronics & Vision (ICIEV) and 2020 4th International Conference on Imaging, Vision & Pattern Recognition (icIVPR). :1–6.
The traditional reactive web caching system is becoming less popular day by day due to its inefficiency in handling overwhelming requests for multimedia content. An intelligent web caching system intends to make optimal cache decisions by proactively predicting future popular contents (FPC). In recent years, a few approaches have proposed intelligent caching systems concerned with proactive caching, and those works underscored the importance of FPC prediction using prediction models. However, FPC prediction alone may not yield the optimal solution in every scenario. In this paper, a technique named IntellCache is proposed that increases caching efficiency by making a cache decision, i.e., a content-storing decision, before storing the predicted FPC. Different deep learning models, such as the multilayer perceptron (MLP), long short-term memory (LSTM) recurrent neural networks (RNN), and ConvLSTM, a combination of LSTM and convolutional neural networks (CNN), are compared to identify the most efficient model for FPC. Information on 18 years of content from the MovieLens data repository was mined to evaluate the proposed approach. Results show that the proposed scheme outperforms previous solutions by achieving a higher cache hit ratio and lower average delay, thus ensuring users' satisfaction.
2021-05-13
Feng, Xiaohua, Feng, Yunzhong, Dawam, Edward Swarlat.  2020.  Artificial Intelligence Cyber Security Strategy. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :328–333.
Nowadays, STEM (science, technology, engineering and mathematics) has never been taken so seriously, and Artificial Intelligence (AI) currently plays an important role in it. During the 2020 COVID-19 pandemic, coronavirus disease spread across the world we live in, and every government sought advice from scientists before making its strategic plan. Most countries collect data from hospitals (and care homes and so on), carry out data analysis, and build AI models to predict potential development patterns in order to shape government strategy. AI security therefore becomes essential: if a security attack corrupts the patterns, the model no longer yields true predictions, which could result in thousands of lives lost, and the potential consequences of such inaccurate forecasts could be even worse. Taking security into account during forecast AI modelling, with step-by-step data governance, will therefore be significant, and cyber security should be applied throughout this kind of prediction process using AI deep learning technology. AI security impact is a principal concern worldwide and is also significant for both natural science and social science researchers to consider in the future. In particular, because many services run on online devices, security defenses are essential, and the results should be subject to proper data governance with security. AI security strategy should be a top priority influencing governments and their citizens around the world, and it will help government strategy makers strike a reasonable balance among technological, social and political concerns. In this paper, strategy-related challenges of AI and security are discussed, along with suggestions on the AI cyber security and politics trade-off, from the initial planning stage through near-future development.
Ho, Tsung-Yu, Chen, Wei-An, Huang, Chiung-Ying.  2020.  The Burden of Artificial Intelligence on Internal Security Detection. 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET). :148–150.
Our research team has devoted many years to extracting internal malicious behavior by monitoring network traffic. We applied deep learning approaches to recognize malicious patterns within the network, but this methodology may create additional work in examining the results produced by AI models. Hence, this paper addresses this scenario, considers the burden of AI, and proposes an idea for long-term reliable detection in future work.