Biblio

Found 150 results

Filters: Keyword is artificial intelligence
2021-05-05
Poudyal, Subash, Dasgupta, Dipankar.  2020.  AI-Powered Ransomware Detection Framework. 2020 IEEE Symposium Series on Computational Intelligence (SSCI). :1154—1161.

Ransomware attacks are taking advantage of the ongoing pandemic and targeting vulnerable systems in the business, health, education, insurance, banking, and government sectors. Various approaches have been proposed to combat ransomware, but adaptive malware writers often bypass these security checkpoints. Commercial tools for ransomware analysis and detection are available, but their performance is questionable. This paper proposes an AI-based ransomware detection framework and a detection tool (AIRaD) that combines static and dynamic malware analysis techniques. Dynamic binary instrumentation is performed with the PIN tool, and function call traces are analyzed using the Cuckoo sandbox and Ghidra. Features extracted at the DLL, function call, and assembly levels are processed with NLP and association rule mining techniques and fed to different machine learning classifiers. Support vector machine and AdaBoost with J48 achieved the highest accuracy of 99.54%, with a 0.005 false-positive rate, for a multi-level combined term frequency approach.
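
The AIRaD pipeline itself is not published with this abstract; as a rough illustration of the style of approach it describes (term-frequency features over DLL, function-call and assembly tokens fed to SVM and AdaBoost classifiers), the following scikit-learn sketch uses invented toy traces and labels. The AdaBoost default base estimator stands in for the paper's "AdaBoost with J48" (J48 being Weka's C4.5 decision tree).

```python
# Minimal sketch of a term-frequency + classifier pipeline in the spirit of AIRaD.
# The trace strings and labels below are toy placeholders, not data from the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Each "document" is a space-separated trace of API/DLL/assembly tokens.
traces = [
    "CreateFileW CryptEncrypt DeleteFileW advapi32.dll xor mov",
    "RegOpenKeyExW ReadFile WriteFile kernel32.dll mov push",
    "CryptGenKey CryptEncrypt MoveFileExW advapi32.dll loop xor",
    "GetSystemTimeAsFileTime ReadFile CloseHandle kernel32.dll add ret",
] * 10                         # repeated so cross-validation has enough samples
labels = [1, 0, 1, 0] * 10     # 1 = ransomware-like, 0 = benign-like (toy labels)

svm_pipeline = Pipeline([
    ("tf", CountVectorizer()),     # term-frequency features over the trace tokens
    ("clf", LinearSVC()),          # SVM classifier
])
ada_pipeline = Pipeline([
    ("tf", CountVectorizer()),
    ("clf", AdaBoostClassifier()), # boosted decision trees, analogous to AdaBoost with J48
])

for name, pipe in [("SVM", svm_pipeline), ("AdaBoost", ada_pipeline)]:
    scores = cross_val_score(pipe, traces, labels, cv=5)
    print(name, "accuracy:", scores.mean())
```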

2021-04-28
Chaudhry, Y. S., Sharma, U., Rana, A..  2020.  Enhancing Security Measures of AI Applications. 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). :713—716.
Artificial Intelligence (AI), often discussed alongside machine learning and labelled as the future, has been in the spotlight for more than a decade, and developers continue to work on it constantly given its vast scope for development. AI is being integrated into existing objects in the world, as well as into those yet to arrive, to improve them and make them more reliable. As its name states, AI is intelligence: intelligence shown by machines that work similarly to humans and pursue the goals they are given. Another application of AI is to provide defenses against current cyber threats, vehicle overrides, and similar attacks. However, AI is ultimately still code, and so it is prone to corruption or misuse. To prevent the misuse of these technologies, they must be deployed together with a sustainable defensive system. A default defense system will usually exist, but it can be compromised by hackers or undermined by malfunctioning intelligence in certain scenarios, with potentially disastrous results, especially in robotics. This paper offers a proposal referred to as “Guard Masking” as an alternative for securing Artificial Intelligence.
2021-04-27
Piplai, A., Ranade, P., Kotal, A., Mittal, S., Narayanan, S. N., Joshi, A..  2020.  Using Knowledge Graphs and Reinforcement Learning for Malware Analysis. 2020 IEEE International Conference on Big Data (Big Data). :2626—2633.

Machine learning algorithms used to detect attacks are limited by the fact that they cannot incorporate the background knowledge that an analyst has, which limits their suitability for detecting new attacks. Reinforcement learning (RL) is different from the traditional machine learning algorithms used in the cybersecurity domain: it does not need a mapping of the input-output space or a specific user-defined metric to compare data points. This matters for cybersecurity, especially for malware detection and mitigation, because not all problems have a single, known, correct answer; security researchers often have to resort to guided trial and error to understand the presence of a malware and mitigate it. In this paper, we incorporate prior knowledge, represented as Cybersecurity Knowledge Graphs (CKGs), to guide the exploration of an RL algorithm to detect malware. CKGs capture semantic relationships between cyber-entities, including those mined from open sources. Instead of trying random guesses and observing the change in the environment, we use verified knowledge about cyber-attacks to guide our reinforcement learning algorithm to effectively identify ways to detect the presence of malicious filenames so that they can be deleted to mitigate a cyber-attack. We show that such a guided system outperforms a base RL system in detecting malware.
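
As a rough sketch of the guided-exploration idea (not the authors' CKG implementation), the toy Python below biases an epsilon-greedy agent's exploration with a hand-written knowledge-graph-style prior over candidate filenames; the filenames, prior scores, reward values and ground truth are invented, and the environment is simplified to a single-step bandit.

```python
# Toy sketch: epsilon-greedy learning whose exploration is biased by a prior
# derived from a hand-written knowledge-graph score for each candidate file.
import random

candidates = ["svch0st.exe", "notepad.exe", "update_sched.dll", "calc.exe"]
kg_prior = {"svch0st.exe": 0.9, "notepad.exe": 0.1,
            "update_sched.dll": 0.6, "calc.exe": 0.1}   # similarity to known-bad entities
truly_malicious = {"svch0st.exe", "update_sched.dll"}   # hidden ground truth of the toy env

def reward(action):
    return 1.0 if action in truly_malicious else -0.2   # penalize deleting benign files

q = {a: 0.0 for a in candidates}
alpha, epsilon, episodes = 0.1, 0.3, 500

def guided_explore():
    # Knowledge-guided exploration: sample proportionally to the KG prior
    weights = [kg_prior[a] for a in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

for _ in range(episodes):
    if random.random() < epsilon:
        a = guided_explore()                   # guided instead of uniform-random
    else:
        a = max(q, key=q.get)                  # exploit current estimates
    q[a] += alpha * (reward(a) - q[a])         # single-step (bandit-style) value update

print(sorted(q.items(), key=lambda kv: -kv[1]))
```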

Vuppalapati, C., Ilapakurti, A., Kedari, S., Vuppalapati, R., Vuppalapati, J., Kedari, S..  2020.  The Role of Combinatorial Mathematical Optimization and Heuristics to improve Small Farmers to Veterinarian access and to create a Sustainable Food Future for the World. 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4). :214–221.
The global demand for agriculture and dairy products is rising and is expected to double by 2050. This will challenge agriculture markets in ways we have not seen before: unprecedented pressure to increase the productivity of already shrinking dairy farms; the need for untethered, perpetual access to veterinarians by small dairy farms, which are economic engines of developing countries; and an unprecedented need to increase the productivity of veterinarians, who are already understaffed, over-stressed, and resource-constrained in meeting current global dairy demands. A lack of innovative solutions to this challenge would be a major obstacle to achieving a sustainable food future and a colossal roadblock to ending economic disparities. The paper proposes a novel data-driven framework, fed by data generated from dairy sensors and by mathematical formulations using solvers, to generate a prioritized daily list of farm visits for each veterinarian, so that the most needy farms are covered in time and small farmers gain better access to veterinarians, a scarce and heavily stressed resource.
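
The paper's actual formulation is not reproduced in the abstract; a minimal sketch of the kind of solver-based prioritization it describes is a 0/1 selection of farms under a daily time budget, shown below with PuLP. Farm names, priority scores and visit durations are invented for illustration.

```python
# Minimal sketch of a solver-based daily visit list: choose which farms a
# veterinarian visits today to maximize total priority within a time budget.
# Farm names, priority scores and travel/visit times are invented.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

farms = {            # farm -> (priority from sensor-derived herd-health data, minutes needed)
    "farm_A": (9, 120),
    "farm_B": (4, 60),
    "farm_C": (7, 150),
    "farm_D": (6, 90),
    "farm_E": (3, 45),
}
minutes_available = 300

prob = LpProblem("vet_daily_visits", LpMaximize)
visit = {f: LpVariable(f"visit_{f}", cat="Binary") for f in farms}

prob += lpSum(farms[f][0] * visit[f] for f in farms)                        # maximize covered priority
prob += lpSum(farms[f][1] * visit[f] for f in farms) <= minutes_available   # daily time budget

prob.solve()
plan = [f for f in farms if visit[f].value() == 1]
print("Today's prioritized visit list:", plan)
```
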
2021-04-09
Mishra, A., Yadav, P..  2020.  Anomaly-based IDS to Detect Attack Using Various Artificial Intelligence Machine Learning Algorithms: A Review. 2nd International Conference on Data, Engineering and Applications (IDEA). :1—7.
Cyber-attacks are becoming more complex, making accurate intrusion detection (ID) an increasingly difficult task. Failure to prevent intrusions can reduce the reliability of security services, such as the integrity, privacy, and availability of data. The rapid proliferation of computer networks (CNs) has reshaped the perception of network security, and easily accessible environments expose computer networks to many threats from hackers. Threats to a network are numerous and potentially devastating. Researchers have developed Intrusion Detection Systems (IDS) to identify attacks in a wide variety of environments. Several approaches to intrusion detection, usually categorized as Signature-based Intrusion Detection Systems (SIDS) and Anomaly-based Intrusion Detection Systems (AIDS), have been proposed in the literature to address computer security hazards. This survey paper presents a review of current IDS, a thorough analysis of prominent recent works, and the datasets generally used for evaluation. It also introduces evasion techniques used by attackers to avoid detection, and describes AIDS for attack detection. IDS is an applied research area in artificial intelligence (AI) that uses multiple machine learning algorithms.
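
As a concrete illustration of the anomaly-based (AIDS) family the review covers, the sketch below trains an unsupervised outlier detector on toy "normal" network-flow features and flags deviating flows; the features and data are illustrative only and not tied to any dataset discussed in the survey.

```python
# Illustrative anomaly-based IDS: learn "normal" flow behaviour, flag outliers.
# The synthetic flow features (duration, bytes, packet rate) are toy data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=[1.0, 500.0, 10.0], scale=[0.2, 50.0, 2.0], size=(500, 3))
suspect_flows = np.array([[0.1, 9000.0, 400.0],    # burst of large, fast packets
                          [30.0, 20.0, 0.1]])      # very long, near-silent connection

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)
print(detector.predict(suspect_flows))   # -1 = anomaly (possible intrusion), 1 = normal
```
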
Fourastier, Y., Baron, C., Thomas, C., Esteban, P..  2020.  Assurance levels for decision making in autonomous intelligent systems and their safety. 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT). :475—483.
The autonomy of intelligent systems and their safety rely on their ability to make local decisions based on collected environmental information. This is even more true for cyber-physical systems running safety-critical activities. While this intelligence is partial and fragmented, and cognitive techniques are of limited maturity, the decision function must produce results whose validity and scope must be weighed in light of the underlying assumptions, unavoidable uncertainty, and hypothetical safety limitations. Beyond the dependability of the cognitive techniques, what matters is the assurance level of the self-made decisions. Beyond the pure decision-making capabilities of the autonomous intelligent system, we need techniques that guarantee the system assurance required for the intended use; security mechanisms for cognitive systems may consequently be tightly intertwined with them. We propose a trustworthiness module which is part of the system and contributes to its resulting safety. In this paper, we briefly review the state of the art regarding the dependability of cognitive techniques, the definition of assurance levels in this context, and related engineering practices. We elaborate on the design of autonomous intelligent systems safety, then discuss its security design and approaches for the mitigation of safety violations by the cognitive functions.
2021-03-29
Kummerow, A., Monsalve, C., Rösch, D., Schäfer, K., Nicolai, S..  2020.  Cyber-physical data stream assessment incorporating Digital Twins in future power systems. 2020 International Conference on Smart Energy Systems and Technologies (SEST). :1—6.

Reliable and secure grid operations become more and more challenging in the context of increasing IT/OT convergence and decreasing dynamic margins in today's power systems. To ensure the correct operation of monitoring and control functions in control centres, an intelligent assessment of the different information sources is necessary to provide a robust data source in case of critical physical events as well as cyber-attacks. Within this paper, a holistic data stream assessment methodology is proposed, using expert-knowledge-based cyber-physical situational awareness for different steady and transient system states. This approach goes beyond existing techniques by combining high-resolution PMU data with SCADA information as well as Digital Twin and AI-based anomaly detection functionalities.

Schiliro, F., Moustafa, N., Beheshti, A..  2020.  Cognitive Privacy: AI-enabled Privacy using EEG Signals in the Internet of Things. 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (DependSys). :73—79.

With the advent of Industry 4.0, the Internet of Things (IoT) and Artificial Intelligence (AI), smart entities are now able to read the minds of users by extracting cognitive patterns from electroencephalogram (EEG) signals. Such brain data may include users' experiences, emotions, motivations, and other previously private mental and psychological processes. Accordingly, users' cognitive privacy may be violated, and the right to cognitive privacy should protect individuals against unconsented intrusion by third parties into their brain data as well as against the unauthorized collection of those data. This has caused a growing concern among users and industry experts that laws are needed to protect the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity. In this paper, we propose an AI-enabled EEG model, namely Cognitive Privacy, that aims to protect data and classifies users and their tasks from EEG data. We present a model that protects data from disclosure using normalized correlation analysis and classifies subjects (a multi-class classification problem) and their tasks (eyes open vs. eyes closed, as a binary classification problem) using a long short-term memory (LSTM) deep learning approach. The model has been evaluated using the PhysioNet BCI EEG data set, and the results have revealed its high performance in classifying users and their tasks while achieving high data privacy.
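
The privacy-preserving correlation step cannot be reconstructed from the abstract alone; the sketch below only illustrates the LSTM task classifier (eyes open vs. eyes closed) on randomly generated arrays standing in for windowed PhysioNet BCI EEG data, with an assumed window size and channel count.

```python
# Minimal LSTM sketch for the binary task classification (eyes open vs. closed).
# Random arrays stand in for windowed, preprocessed EEG from PhysioNet BCI.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

timesteps, channels = 160, 64             # assumed: 1 s of 160 Hz EEG over 64 electrodes
x_train = np.random.rand(200, timesteps, channels).astype("float32")
y_train = np.random.randint(0, 2, size=(200,))

model = Sequential([
    Input(shape=(timesteps, channels)),
    LSTM(64),                              # long short-term memory over the EEG window
    Dense(1, activation="sigmoid"),        # binary output: eyes open / eyes closed
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
```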

Singh, S., Nasoz, F..  2020.  Facial Expression Recognition with Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0324—0328.

Emotions are a powerful tool in communication, and one way that humans show their emotions is through their facial expressions. Facial expression recognition is a challenging and powerful task in social communication, as facial expressions are key to non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate the classification of FER based on static images, using CNNs, without requiring any pre-processing or feature extraction tasks. The paper also illustrates techniques to improve future accuracy in this area by using pre-processing, which includes face detection and illumination correction, and feature extraction to isolate the most prominent parts of the face, including the jaw, mouth, eyes, nose, and eyebrows. Furthermore, we review the literature, present our CNN architecture, and discuss the use of max-pooling and dropout, which eventually aided in better performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared to 75.2% for the state of the art.
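
As an illustration of the kind of architecture described (a CNN over 48x48 grayscale FER2013 crops with max-pooling and dropout), the Keras sketch below uses random tensors in place of the dataset; the layer sizes are assumptions, not the authors' exact architecture.

```python
# Compact CNN sketch for seven-class facial expression recognition on 48x48
# grayscale crops (the FER2013 input format). Random tensors stand in for data.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense

x = np.random.rand(64, 48, 48, 1).astype("float32")
y = np.random.randint(0, 7, size=(64,))

model = Sequential([
    Input(shape=(48, 48, 1)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),                 # max-pooling for spatial down-sampling
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Dropout(0.25),                        # dropout to reduce overfitting
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(7, activation="softmax"),       # one output per expression class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```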

Pranav, E., Kamal, S., Chandran, C. Satheesh, Supriya, M. H..  2020.  Facial Emotion Recognition Using Deep Convolutional Neural Network. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). :317—320.

The rapid growth of artificial intelligence has contributed greatly to the technology world. As traditional algorithms failed to meet human needs in real time, machine learning and deep learning algorithms have achieved great success in applications such as classification systems, recommendation systems, and pattern recognition. Emotion plays a vital role in determining the thoughts, behaviour and feelings of a human. An emotion recognition system can be built by utilizing the benefits of deep learning, and applications such as feedback analysis and face unlocking can be implemented with good accuracy. The main focus of this work is to create a Deep Convolutional Neural Network (DCNN) model that classifies five different human facial emotions. The model is trained, tested and validated using a manually collected image dataset.

Agirre, I..  2020.  Safe and secure software updates on high-performance embedded systems. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :68—69.

The next generation of dependable embedded systems features autonomy and higher levels of interconnection. Autonomy is commonly achieved with the support of artificial intelligence algorithms that pose high computing demands on the hardware platform, reaching a high-performance scale. This involves a dramatic increase in software and hardware complexity, a fact that, together with the novelty of the technology, raises serious concerns regarding system dependability. Traditional approaches to certification require demonstrating that the system will be acceptably safe to operate before it is deployed into service. The nature of autonomous systems, with potentially infinite scenarios, configurations and unanticipated interactions, makes it increasingly difficult to support such a claim at design time. In this context, extended networking technologies can be exploited to collect post-deployment evidence that serves to oversee whether safety assumptions are preserved during operation and to continuously improve the system through regular software updates. These software updates are not only convenient for critical bug fixing but also necessary for keeping the interconnected system resilient against security threats. However, such an approach requires reconsidering traditional certification practices.

2021-03-09
Bronzin, T., Prole, B., Stipić, A., Pap, K..  2020.  Individualization of Anonymous Identities Using Artificial Intelligence (AI). 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO). :1058–1063.

Individualization of anonymous identities using artificial intelligence enables innovative human-computer interaction through the personalization of communication that is, at the same time, individual and anonymous. This paper presents a possible approach for individualizing anonymous identities in real time. It uses computer vision and artificial intelligence to automatically detect and recognize a person's age group, gender, human body measures, proportions and other specific personal characteristics. The collected data constitute the person's so-called biometric footprint and are linked to a unique (but still anonymous) identity that is recorded in the computer system, along with other information that makes up the person's profile. Identity anonymization can be achieved by appropriate asymmetric encryption of the biometric footprint (with no additional personal information being stored), and integrity can be ensured using blockchain technology. Data collected in this manner are GDPR compliant.

2021-03-04
Carrozzo, G., Siddiqui, M. S., Betzler, A., Bonnet, J., Perez, G. M., Ramos, A., Subramanya, T..  2020.  AI-driven Zero-touch Operations, Security and Trust in Multi-operator 5G Networks: a Conceptual Architecture. 2020 European Conference on Networks and Communications (EuCNC). :254—258.
The 5G network solutions currently standardised and deployed do not yet enable the full potential of the pervasive networking and computing envisioned in initial 5G visions: network services and slices with different QoS profiles do not span multiple operators, and security, trust and automation are limited. The evolution of 5G towards a truly production-level stage needs to rely heavily on automated end-to-end network operations, the use of distributed Artificial Intelligence (AI) for cognitive network orchestration and management, and minimal manual intervention (zero-touch automation). All these elements are key to implementing highly pervasive network infrastructures. Moreover, Distributed Ledger Technologies (DLT) can be adopted to implement distributed security and trust through Smart Contracts among multiple non-trusted parties. In this paper, we propose an initial concept of a zero-touch security and trust architecture for ubiquitous computing and connectivity in 5G networks. Our architecture aims at cross-domain security and trust orchestration mechanisms by coupling DLTs with AI-driven operations and service lifecycle automation in multi-tenant and multi-stakeholder environments. Three representative use cases are identified, through which the work will be validated in the test facilities at 5GBarcelona and 5TONIC/Madrid.
Abedin, N. F., Bawm, R., Sarwar, T., Saifuddin, M., Rahman, M. A., Hossain, S..  2020.  Phishing Attack Detection using Machine Learning Classification Techniques. 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS). :1125—1130.

Phishing attacks are the most common form of attack that can happen over the internet. This method involves attackers attempting to collect a user's data without his or her consent through emails, URLs, and any other link that leads to a deceptive page where the user is persuaded to perform specific actions that lead to the successful completion of an attack. These attacks can allow an attacker to collect vital information that often lets the attacker impersonate the victim and do things that only the victim should have been able to do, such as carrying out transactions, messaging someone else, or simply accessing the victim's data. Many studies have discussed possible approaches to prevent such attacks. This research work applies three machine learning algorithms to predict any website's phishing status. In the experiments, these models are trained using URL-based features, and zero-day attacks are addressed with a proposed software tool that differentiates legitimate websites from phishing websites by analyzing the website's URL. From observations, the random forest classifier performed with a precision of 97%, a recall of 99%, and an F1 score of 97%. The proposed model is fast and efficient as it works only on the URL and does not use other resources for analysis, as was the case in past studies.
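
As a minimal illustration of URL-only phishing classification with a random forest, the sketch below derives a few hand-crafted lexical features from each URL; the feature set, example URLs and labels are invented and are not the paper's.

```python
# Minimal URL-only phishing classifier sketch: hand-crafted lexical features
# fed to a random forest. Example URLs and labels are illustrative only.
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                                  # overall URL length
        url.count("-"),                            # hyphens often pad look-alike domains
        host.count("."),                           # sub-domain depth
        int(parsed.scheme != "https"),             # missing TLS
        int(any(c.isdigit() for c in host)),       # digits in the host name
        int("@" in url),                           # '@' tricks in the URL
    ]

train_urls = [
    ("https://www.example.com/login", 0),
    ("http://secure-example.com.verify-account.xyz/login", 1),
    ("https://docs.python.org/3/library/urllib.html", 0),
    ("http://paypa1-security-update.top/confirm?id=12", 1),
] * 5
X = [url_features(u) for u, _ in train_urls]
y = [label for _, label in train_urls]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([url_features("http://account-verify.example-login.xyz/submit")]))
```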

2021-03-01
Sarathy, N., Alsawwaf, M., Chaczko, Z..  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advancements in technology have made it easy to train models, or use existing ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and characterizing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors, and the last stage involves the use of weighted means to measure similarity. This research addresses the problem using an eXplainable Artificial Intelligence (XAI) approach. Findings from this research, based on a small dataset, indicate that the approach offers encouraging results, and further investigation could have a significant impact on how face profiles are identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
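
The two stages named in the abstract, geometric ratio feature vectors followed by a weighted-mean similarity, can be sketched as below; the landmark coordinates, chosen ratios and weights are invented for illustration and are not the paper's feature definitions.

```python
# Sketch of ratio-based profile matching: landmark distances -> geometric ratios,
# then a weighted-mean similarity between two ratio vectors. Points/weights are invented.
import numpy as np

def ratios(landmarks):
    # landmarks: dict of named (x, y) profile points
    d = lambda a, b: np.linalg.norm(np.array(landmarks[a]) - np.array(landmarks[b]))
    nose_to_chin = d("nose_tip", "chin")
    return np.array([
        d("nose_tip", "nose_bridge") / nose_to_chin,    # nose length ratio
        d("lips", "chin") / nose_to_chin,               # lower-face ratio
        d("brow", "nose_bridge") / nose_to_chin,        # upper-face ratio
    ])

profile_a = {"nose_tip": (90, 120), "nose_bridge": (80, 80), "chin": (70, 180),
             "lips": (78, 150), "brow": (75, 60)}
profile_b = {"nose_tip": (91, 119), "nose_bridge": (81, 79), "chin": (71, 181),
             "lips": (79, 149), "brow": (74, 61)}

weights = np.array([0.4, 0.35, 0.25])                   # per-ratio importance (illustrative)
a, b = ratios(profile_a), ratios(profile_b)
similarity = np.average(1.0 - np.abs(a - b) / np.maximum(a, b), weights=weights)
print(f"weighted similarity: {similarity:.3f}")         # compare against a decision threshold
```
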
Tao, J., Xiong, Y., Zhao, S., Xu, Y., Lin, J., Wu, R., Fan, C..  2020.  XAI-Driven Explainable Multi-view Game Cheating Detection. 2020 IEEE Conference on Games (CoG). :144–151.
Online gaming is one of the most successful applications, with a large number of players interacting in an online persistent virtual world through the Internet. However, some cheating players gain improper advantages over normal players by using illegal automated plugins, which has brought great harm to game health and player enjoyment. Game companies have devoted much effort to cheating detection with multi-view data sources and have achieved great accuracy improvements by applying artificial intelligence (AI) techniques. However, generating explanations for cheating detection from multiple views remains a challenging task. To respond to the different purposes of explainability in AI models for different audience profiles, we propose EMGCD, the first explainable multi-view game cheating detection framework driven by explainable AI (XAI). It combines cheating explainers with cheating classifiers from different views to generate individual, local and global explanations, which contribute to evidence generation, reason generation, model debugging and model compression. EMGCD has been implemented and deployed in multiple game productions at NetEase Games, achieving remarkable and trustworthy performance. Our framework can also easily generalize to other types of related tasks in online games, such as explainable recommender systems, explainable churn prediction, etc.
D’Alterio, P., Garibaldi, J. M., John, R. I..  2020.  Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI). 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
In recent years, there has been a growing need for intelligent systems that not only provide reliable classifications but can also produce explanations for the decisions they make. The demand for increased explainability has led to the emergence of explainable artificial intelligence (XAI) as a specific research field. In this context, fuzzy logic systems represent a promising tool thanks to their inherently interpretable structure. The use of a rule base and linguistic terms, in fact, has allowed researchers to create models that can produce explanations in natural language for each of the classifications they make. So far, however, designing systems that make use of interval type-2 (IT2) fuzzy logic and also give explanations for their outputs has been very challenging, partially due to the presence of the type-reduction step. In this paper, it is shown how constrained interval type-2 (CIT2) fuzzy sets represent a valid alternative to conventional interval type-2 sets in order to address this issue. Through the analysis of two case studies from the medical domain, it is shown how explainable CIT2 classifiers are produced. These systems can explain which rules contributed to the creation of each of the endpoints of the output interval centroid, while showing (in these examples) the same level of accuracy as their IT2 counterparts.
Houzé, É, Diaconescu, A., Dessalles, J.-L., Mengay, D., Schumann, M..  2020.  A Decentralized Approach to Explanatory Artificial Intelligence for Autonomic Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :115–120.
While Explanatory AI (XAI) is attracting increasing interest from academic research, most AI-based solutions still rely on black box methods. This is unsuitable for certain domains, such as smart homes, where transparency is key to gaining user trust and solution adoption. Moreover, smart homes are challenging environments for XAI, as they are decentralized systems that undergo runtime changes. We aim to develop an XAI solution for addressing problems that an autonomic management system either could not resolve or resolved in a surprising manner. This implies situations where the current state of affairs is not what the user expected, hence requiring an explanation. The objective is to solve the apparent conflict between expectation and observation through understandable logical steps, thus generating an argumentative dialogue. While focusing on the smart home domain, our approach is intended to be generic and transferable to other cyber-physical systems offering similar challenges. This position paper focuses on proposing a decentralized algorithm, called D-CAN, and its corresponding generic decentralized architecture. This approach is particularly suited for SISSY systems, as it enables XAI functions to be extended and updated when devices join and leave the managed system dynamically. We illustrate our proposal via several representative case studies from the smart home domain.
Meskauskas, Z., Jasinevicius, R., Kazanavicius, E., Petrauskas, V..  2020.  XAI-Based Fuzzy SWOT Maps for Analysis of Complex Systems. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
The classical SWOT methodology and many of the tools based on it are very static, used for a single stable project and lacking dynamics [1]. This paper proposes the idea of combining several SWOT analyses, enriched with the computing with words (CWW) paradigm, into a single network. In this network, an individual analysis of the situation is treated as a node. The whole structure is based on fuzzy cognitive maps (FCM) with forward and backward chaining, so it is called fuzzy SWOT maps. The fuzzy SWOT maps methodology newly introduces dynamics in which projects interact, as they do in a real dynamic environment. The whole fuzzy SWOT maps network structure has explainable artificial intelligence (XAI) traits because each node in this network is a "white box": the whole reasoning chain can be tracked and checked to determine why a particular decision was made or why and how one project affects another, which increases explainability. To confirm the vitality of the approach, a case with three interacting projects has been analyzed with a developed prototype software tool, and results are delivered.
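
Underlying fuzzy SWOT maps is the fuzzy cognitive map update, which can be sketched as below with three interacting "project" nodes; the weight matrix and initial activations are invented, and the paper's CWW enrichment is not reproduced.

```python
# Minimal fuzzy cognitive map iteration: node activations are repeatedly updated
# from weighted neighbour influences and squashed to [0, 1]. Weights are invented.
import numpy as np

nodes = ["project_A", "project_B", "project_C"]
# w[i, j] = influence of node i on node j (positive = reinforcing, negative = inhibiting)
w = np.array([[ 0.0,  0.6, -0.3],
              [ 0.2,  0.0,  0.5],
              [-0.4,  0.1,  0.0]])
state = np.array([0.8, 0.3, 0.5])           # initial fuzzy assessments of each project

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
for _ in range(20):                          # iterate until the map settles
    state = sigmoid(state + state @ w)       # each node keeps its value plus incoming influence

for name, value in zip(nodes, state):
    print(f"{name}: {value:.2f}")            # inspecting w explains why a value moved (XAI trait)
```
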
Kuppa, A., Le-Khac, N.-A..  2020.  Black Box Attacks on Explainable Artificial Intelligence(XAI) methods in Cyber Security. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for the successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand for transparency and explainability from stakeholders is increasing. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods which help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; and (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness and confidence security properties of gradient-based XAI methods. We validate our proposed system on 3 security-relevant data sets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
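
The authors' attack is not given in the abstract; the sketch below only illustrates the consistency property being probed: for a linear model, gradient-style attributions reduce to coefficient-times-input, and one can test whether a small perturbation leaves the predicted class unchanged while reordering the top explanation features. The data, model and perturbation scale are toy assumptions.

```python
# Sketch of the "consistency" idea: does a small input perturbation leave the
# predicted class unchanged but reorder the top gradient-based explanation features?
# Data, model and perturbation are toy; this is not the attack from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def top_features(x, k=3):
    saliency = model.coef_[0] * x              # gradient-style attribution for a linear model
    return set(np.argsort(np.abs(saliency))[-k:])

x = X[0]
x_perturbed = x + np.random.default_rng(0).normal(scale=0.3, size=x.shape)

same_class = model.predict([x])[0] == model.predict([x_perturbed])[0]
same_explanation = top_features(x) == top_features(x_perturbed)
print("class unchanged:", same_class, "| top-3 explanation unchanged:", same_explanation)
```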

2021-02-22
Martinelli, F., Marulli, F., Mercaldo, F., Marrone, S., Santone, A..  2020.  Enhanced Privacy and Data Protection using Natural Language Processing and Artificial Intelligence. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

Artificial Intelligence systems have enabled significant benefits for users and society, but while the data feeding them keep increasing, a side door to privacy and security leaks is opened. Severe vulnerabilities to the right to privacy have obliged governments to enact specific regulations to ensure privacy preservation in any kind of transaction involving sensitive information. In the case of digital and/or physical documents comprising sensitive information, the right to privacy can be preserved by data obfuscation procedures. The capability of recognizing sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever-increasing amount of documents to process. Artificial intelligence could proficiently mitigate the effort of the human officers and speed up processes, but until enough knowledge is available in a machine-readable format, automatic and effective systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built, from scratch, domain-specific knowledge data sets for training artificial intelligence models that support human experts in privacy-preserving tasks. We exploited a mixture of natural language processing techniques applied to unlabeled domain-specific document corpora to automatically obtain labeled documents in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains, with human experts supporting the validation. The results we obtained, estimated in terms of precision, recall and F1-score across these two domains, were promising and encouraged us to pursue further investigations.
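
As a rough stand-in for the domain-specific models described, the sketch below tags and obfuscates sensitive spans using spaCy's general-purpose pretrained NER model; the sample sentence is invented, and the chosen label set is only a small subset of what healthcare or justice corpora would require.

```python
# Sketch of automatic sensitive-span tagging with a pretrained NER model standing
# in for the domain-specific models described in the paper.
# Requires:  pip install spacy  and  python -m spacy download en_core_web_sm
import spacy

SENSITIVE_LABELS = {"PERSON", "DATE", "GPE", "ORG"}   # subset chosen for illustration

nlp = spacy.load("en_core_web_sm")
text = ("Patient John Smith was admitted to Springfield General Hospital "
        "on 12 March 2019 after the incident reported in Boston.")

doc = nlp(text)
obfuscated = text
for ent in doc.ents:
    if ent.label_ in SENSITIVE_LABELS:
        print(f"tagged {ent.label_:>7}: {ent.text}")
        obfuscated = obfuscated.replace(ent.text, f"[{ent.label_}]")   # simple obfuscation step

print(obfuscated)
```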

2021-02-16
Kowalski, P., Zocholl, M., Jousselme, A.-L..  2020.  Explainability in threat assessment with evidential networks and sensitivity spaces. 2020 IEEE 23rd International Conference on Information Fusion (FUSION). :1—8.
One of the main threats to underwater communication cables identified in recent years is possible tampering or damage by malicious actors. This paper proposes a solution with explanation abilities to detect and investigate this kind of threat within the evidence theory framework. The reasoning scheme implements the traditional “opportunity-capability-intent” threat model to assess the degree to which a given vessel may pose a threat. The scenario discussed considers a variety of possible pieces of information available from different sources. A source quality model is used to reason with partially reliable sources, and the impact of this meta-information on the overall assessment is illustrated. Examples of uncertain relationships between the relevant variables are modelled, and the constructed model is used to investigate the probability of threat of four vessels of different types. One of these cases is discussed in more detail to demonstrate the explanation abilities. Explanations about the inference are provided thanks to sensitivity spaces, in which the impact of the different pieces of information on the reasoning is compared.
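
A minimal flavour of the evidential reasoning involved, though far simpler than the paper's network, is Dempster's rule with source discounting, sketched below over the frame {threat, no_threat}; the reliabilities and mass values are invented.

```python
# Minimal Dempster-Shafer sketch: discount two sources by their reliability and
# combine their mass functions over {threat, no_threat}. All numbers are invented.
FRAME = frozenset({"threat", "no_threat"})

def discount(masses, reliability):
    # Move (1 - reliability) of every focal element's mass onto total ignorance (the frame).
    out = {focal: m * reliability for focal, m in masses.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - reliability)
    return out

def combine(m1, m2):
    # Dempster's rule of combination with conflict normalization.
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {focal: m / (1.0 - conflict) for focal, m in combined.items()}

ais_source = discount({frozenset({"threat"}): 0.7, FRAME: 0.3}, reliability=0.9)
intel_source = discount({frozenset({"no_threat"}): 0.5, FRAME: 0.5}, reliability=0.6)

for focal, mass in combine(ais_source, intel_source).items():
    print(set(focal), round(mass, 3))
```
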
Hongbin, Z., Wei, W., Wengdong, S..  2020.  Safety and Damage Assessment Method of Transmission Line Tower in Goaf Based on Artificial Intelligence. 2020 IEEE/IAS Industrial and Commercial Power System Asia (I CPS Asia). :1474—1479.
Transmission line towers in coal mine goafs are affected by surface subsidence, which can cause settling, tilting and even tower collapse, threatening the operational safety of transmission line towers in mined-out areas. Therefore, a safety and damage assessment method for transmission line towers in goafs based on artificial intelligence is proposed. Firstly, a geometric model of the coal seam in the goaf and a structural reliability model of the transmission line tower are constructed to evaluate safety. Then, the random forest algorithm is used to evaluate the damage to the tower, so that protective measures can be taken in time. Finally, a finite element simulation model of tower-foundation interaction is built, and its safety (force) and damage identification are experimentally analyzed. The results show that the proposed method ensures high accuracy of damage assessment and reliable judgment of transmission line tower safety within the allowable error.
2021-02-10
Mishra, P., Gupta, C..  2020.  Cookies in a Cross-site scripting: Type, Utilization, Detection, Protection and Remediation. 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). :1056—1059.
According to Cisco's 2018 annual report, web applications are exposed to several security vulnerabilities that are exploited by hackers in various ways, and attacks are becoming more frequent, specific and sophisticated. Of all the vulnerabilities, more than 40% of attempts are performed via cross-site scripting (XSS). A number of methods have been proposed to examine such vulnerabilities. This paper therefore presents an overview of one such vulnerability: cookies in XSS attacks. The objective is to give an overview of cookies, their types, vulnerabilities and policies, and of their discovery, protection and mitigation via different tools and methods, including cryptography and artificial intelligence techniques. Future issues, directions and research challenges are also discussed.
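
One of the standard cookie-side mitigations such a survey covers is restricting what an injected script can do with a session cookie via cookie attributes; a short standard-library sketch is below, with a placeholder cookie name and value.

```python
# Standard-library sketch of cookie hardening against XSS-driven theft:
# HttpOnly keeps document.cookie from reading it, Secure restricts it to TLS,
# SameSite limits cross-site sending. Name and value are placeholders.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "opaque-random-token"
cookie["session_id"]["httponly"] = True      # not readable by injected JavaScript
cookie["session_id"]["secure"] = True        # only sent over HTTPS
cookie["session_id"]["samesite"] = "Strict"  # not attached to cross-site requests
cookie["session_id"]["path"] = "/"
cookie["session_id"]["max-age"] = 3600       # short lifetime limits the replay window

print(cookie.output())   # the Set-Cookie header a server would emit
```
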
2021-02-03
Bellas, A., Perrin, S., Malone, B., Rogers, K., Lucas, G., Phillips, E., Tossell, C., Visser, E. d.  2020.  Rapport Building with Social Robots as a Method for Improving Mission Debriefing in Human-Robot Teams. 2020 Systems and Information Engineering Design Symposium (SIEDS). :160—163.

Conflicts may arise at any time during military debriefing meetings, especially in high-intensity deployed settings. When such conflicts arise, it takes time to get everyone back into a receptive state of mind so that they engage in reflective discussion rather than unproductive arguing. It has been proposed that the use of social robots equipped with social abilities, such as emotion regulation through rapport building, may help to de-escalate these situations and facilitate critical operational decisions. However, in military settings, the AI agent used in the pre-brief of a mission may not be the same one used in the debrief. The purpose of this study was to determine whether a brief rapport-building session with a social robot could create a connection between a human and a robot agent, and whether consistency in the embodiment of the robot agent was necessary for maintaining this connection once formed. We report the results of a pilot study conducted at the United States Air Force Academy which simulated a military mission (i.e., Gravity and Strike). Participants' connection with the agent, sense of trust, and overall likeability revealed that early rapport building can be beneficial for military missions.