Biblio

Found 3854 results

Filters: Keyword is Human Behavior
2021-04-09
Mir, N., Khan, M. A. U..  2020.  Copyright Protection for Online Text Information : Using Watermarking and Cryptography. 2020 3rd International Conference on Computer Applications Information Security (ICCAIS). :1—4.
Information and security are interdependent elements. Information security has evolved into a matter of global interest, and achieving it requires tools, policies, and assurance of technologies against relevant security risks. The influx of the Internet, while providing a flexible and economical means of sharing information online, has rapidly attracted countless writers. Text, being an important constituent of online information sharing, creates a huge demand for copyright protection of both the text and the web itself. Various visible watermarking techniques have been studied for text documents, but few for web-based text. In this paper, web page watermarking and cryptography for online content copyright protection are proposed, utilizing semantic and syntactic rules of HTML (Hypertext Markup Language), and the approach is tested for the English and Arabic languages.
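For readers wanting a concrete feel for text watermarking, the sketch below hides a short copyright mark in web text using zero-width Unicode characters. This is a deliberately simple, generic technique for illustration only; it is not the semantic/syntactic HTML-rule scheme proposed in the paper, and the function names and payload format are invented.

```python
# Illustrative sketch (not the paper's scheme): embed a short binary watermark
# into web text using zero-width Unicode characters, which render invisibly
# inside an HTML page.

ZWNJ = "\u200c"  # zero-width non-joiner -> bit 0
ZWJ = "\u200d"   # zero-width joiner     -> bit 1

def embed_watermark(text: str, mark: str) -> str:
    """Insert the watermark, encoded as zero-width characters, after the first word."""
    bits = "".join(f"{byte:08b}" for byte in mark.encode("utf-8"))
    payload = "".join(ZWJ if b == "1" else ZWNJ for b in bits)
    first_space = text.find(" ")
    if first_space == -1:
        return text + payload
    return text[:first_space] + payload + text[first_space:]

def extract_watermark(text: str) -> str:
    """Recover the watermark by collecting zero-width characters in order."""
    bits = "".join("1" if ch == ZWJ else "0" for ch in text if ch in (ZWJ, ZWNJ))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - len(bits) % 8, 8))
    return data.decode("utf-8", errors="ignore")

if __name__ == "__main__":
    marked = embed_watermark("Copyright protected article body ...", "(c)2020-ID42")
    print(extract_watermark(marked))  # -> (c)2020-ID42
```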
Peng, X., Hongmei, Z., Lijie, C., Ying, H..  2020.  Analysis of Computer Network Information Security under the Background of Big Data. 2020 5th International Conference on Smart Grid and Electrical Automation (ICSGEA). :409—412.
With the full arrival of the Internet era, the rapid development of technology has facilitated people's work and daily life, but it is also a “double-edged sword”, exposing personal information and other data to a greater threat of abuse. The unique features of big data technology, such as massive storage, parallel computing, and efficient querying, have created a breakthrough opportunity for the key technologies of large-scale network security situational awareness. On the basis of big data acquisition, preprocessing, distributed computing, and mining and analysis, a big data analysis platform provides information security assurance services to the information system. This paper discusses security situational awareness in large-scale network environments and how big data technology advances security perception.
Bhattacharya, M. P., Zavarsky, P., Butakov, S..  2020.  Enhancing the Security and Privacy of Self-Sovereign Identities on Hyperledger Indy Blockchain. 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1—7.
Self-sovereign identities provide user autonomy and immutability to individual identities and full control to their identity owners. The immutability and control are made possible by implementing identities in a decentralized manner on blockchains that are specially designed for identity operations, such as Hyperledger Indy. As with any type of identity, self-sovereign identities too deal with Personally Identifiable Information (PII) of the identity holders and come with the usual privacy and security risks. This study examined certain scenarios of personal data disclosure via credential exchanges between such identities and the risks of man-in-the-middle attacks in the blockchain-based identity system Hyperledger Indy. On the basis of the findings, the paper proposes the following enhancements: 1) a novel attribute sensitivity score model for self-sovereign identity agents to ascertain the sensitivity of attributes shared in credential exchanges, 2) a method of mitigating man-in-the-middle attacks between peer self-sovereign identities, and 3) a novel quantitative model for determining a credential issuer's reputation based on the number of credentials issued in a window period, which is then used to calculate an overall confidence level score for the issuer.
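The third proposed enhancement rates an issuer by its issuance volume in a window period. The sketch below shows one plausible way such a window-based confidence score could be computed; the data structure, window length, saturation constant, and scoring formula are assumptions, not the paper's model.

```python
# Minimal sketch of a window-based issuer confidence score, loosely inspired by
# rating issuers on the number of credentials issued in a window period.
# The formula and thresholds here are assumptions for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class IssuanceRecord:
    issuer_id: str
    timestamp: float  # seconds since epoch

def issuer_confidence(records: List[IssuanceRecord], issuer_id: str, now: float,
                      window_seconds: float = 30 * 24 * 3600,
                      saturation: int = 100) -> float:
    """Return a confidence score in [0, 1] for one issuer.

    Counts credentials issued by `issuer_id` within the window and maps the
    count to [0, 1], saturating at `saturation` issuances (an assumed constant).
    """
    count = sum(1 for r in records
                if r.issuer_id == issuer_id and now - r.timestamp <= window_seconds)
    return min(count, saturation) / saturation

# Example: an issuer with 40 issuances inside the window gets confidence 0.4.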
Lyshevski, S. E., Aved, A., Morrone, P..  2020.  Information-Centric Cyberattack Analysis and Spatiotemporal Networks Applied to Cyber-Physical Systems. 2020 IEEE Microwave Theory and Techniques in Wireless Communications (MTTW). 1:172—177.
Cyber-physical systems (CPS) depend on cybersecurity to ensure functionality, data quality, cyberattack resilience, etc. There are known and unknown cyber threats and attacks that pose significant risks. Information assurance and information security are critical. Many systems are vulnerable to intelligence exploitation and cyberattacks. By investigating cybersecurity risks and formal representations of CPS using spatiotemporal dynamic graphs and networks, this paper examines topics and solutions aimed at evaluating and strengthening: (1) cybersecurity capabilities; (2) information assurance and system vulnerabilities; (3) detection of cyber threats and attacks; (4) situational awareness; etc. We introduce statistically characterized dynamic graphs and novel entropy-centric algorithms and calculi that promise to ensure near-real-time capabilities.
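As a rough illustration of an entropy-centric measure on a dynamic graph, the sketch below computes the Shannon entropy of the degree distribution for successive edge-list snapshots; the specific measure and the anomaly-flagging idea in the trailing comment are assumptions, not the paper's algorithms.

```python
# Illustrative sketch (assumptions, not the paper's method): track an entropy
# measure over snapshots of a dynamic graph, here the Shannon entropy of the
# degree distribution, as a simple indicator for further cyberattack analysis.
import math
from collections import Counter
from typing import Iterable, Tuple

Edge = Tuple[str, str]

def degree_entropy(edges: Iterable[Edge]) -> float:
    """Shannon entropy (bits) of the empirical degree distribution."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    total = sum(degree.values())
    if total == 0:
        return 0.0
    return -sum((d / total) * math.log2(d / total) for d in degree.values())

# A sudden jump in entropy between consecutive snapshots can be flagged for review.
snapshots = [[("a", "b"), ("b", "c")], [("a", "b"), ("a", "c"), ("a", "d")]]
print([round(degree_entropy(s), 3) for s in snapshots])
```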
Ozkan, N., Tarhan, A. K., Gören, B., Filiz, İ, Özer, E..  2020.  Harmonizing IT Frameworks and Agile Methods: Challenges and Solutions for the case of COBIT and Scrum. 2020 15th Conference on Computer Science and Information Systems (FedCSIS). :709—719.
Information Technology (IT) is a complex domain. In order to properly manage IT-related processes, several frameworks, including ITIL (Information Technologies Infrastructure Library), COBIT (Control OBjectives for Information and related Technologies), IT Service CMMI (IT Service Capability Maturity Model), and many others have emerged in recent decades. Meanwhile, the prevalence of Agile methods has increased, raising the question of how the Agile approach can coexist with IT frameworks already adopted in organizations. More specifically, the pursuit of agility in the area of digitalization pushes organizations toward agile transformation while preserving full compliance with IT frameworks for the sake of their survival. The necessity of this coexistence, however, brings its own challenges and requires solutions for harmonizing the requirements of both parties. In this paper, we focus on harmonizing the requirements of COBIT and Scrum in the same organization, which is especially challenging when full compliance with COBIT is expected. Therefore, this study aims to identify the challenges of, and possible solutions for, the coexistence of Scrum and COBIT (version 4.1 in this case) in an organization by considering two case studies: one from the literature and the case of Akbank presented in this study. It thus extends the corresponding previous case study in two ways: it adds one more case to enrich the earlier results, and it provides more opportunity for generalization by considering two independent cases.
Lin, T., Shi, Y., Shu, N., Cheng, D., Hong, X., Song, J., Gwee, B. H..  2020.  Deep Learning-Based Image Analysis Framework for Hardware Assurance of Digital Integrated Circuits. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1—6.
We propose an Artificial Intelligence (AI)/Deep Learning (DL)-based image analysis framework for hardware assurance of digital integrated circuits (ICs). Our aim is to examine and verify various hardware information by analyzing Scanning Electron Microscope (SEM) images of an IC. In our proposed framework, we apply DL-based methods at all essential steps of the analysis. To the best of our knowledge, this is the first such framework that makes heavy use of DL-based methods at all essential analysis steps. Further, to reduce the time and effort required in model re-training, we propose various automated or semi-automated training data preparation methods and demonstrate the effectiveness of using synthetic data to train a model. By applying our proposed framework to a set of SEM images of a large digital IC, we demonstrate its efficacy. Our DL-based methods are fast, accurate, robust against noise, and can automate tasks that were previously performed mainly manually. Overall, we show that DL-based methods can largely increase the level of automation in hardware assurance of digital ICs and improve its accuracy.
Fourastier, Y., Baron, C., Thomas, C., Esteban, P..  2020.  Assurance levels for decision making in autonomous intelligent systems and their safety. 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT). :475—483.
The autonomy of intelligent systems and their safety rely on their ability to make local decisions based on collected environmental information. This is even more the case for cyber-physical systems running safety-critical activities. While this intelligence is partial and fragmented, and cognitive techniques are of limited maturity, the decision function must produce results whose validity and scope are weighed in light of the underlying assumptions, unavoidable uncertainty, and hypothetical safety limitations. Beyond the dependability of the cognitive techniques themselves, what matters is the assurance level of the autonomous decision making. Beyond the pure decision-making capabilities of the autonomous intelligent system, we need techniques that guarantee the system assurance required for the intended use. Security mechanisms for cognitive systems may consequently be tightly intertwined. We propose a trustworthiness module which is part of the system and contributes to its resulting safety. In this paper, we briefly review the state of the art regarding the dependability of cognitive techniques, the definition of assurance levels in this context, and related engineering practices. We elaborate on the design of autonomous intelligent systems for safety, then discuss their security design and approaches for the mitigation of safety violations by the cognitive functions.
Smith, B., Feather, M. S., Huntsberger, T., Bocchino, R..  2020.  Software Assurance of Autonomous Spacecraft Control. 2020 Annual Reliability and Maintainability Symposium (RAMS). :1—7.
Summary & Conclusions: The work described addresses assurance of a planning and execution software system being added to an in-orbit CubeSat to demonstrate autonomous control of that spacecraft. Our focus was on how to develop assurance of the correct operation of the added software in its operational context, our approach to which was to use an assurance case to guide and organize the information involved. The relatively manageable magnitude of the CubeSat and its autonomy demonstration experiment made it plausible to try out our assurance approach in a relatively short timeframe. Additionally, the time was ripe to inject useful assurance results into the ongoing development and testing of the autonomy demonstration. In conducting this, we sought to answer several questions about our assurance approach. The questions, and the conclusions we reached, are as follows: 1. Question: Would our approach to assurance apply to the introduction of a planning and execution software into an existing system? Conclusion: Yes. The use of an assurance case helped focus our attention on the more challenging aspects, notably the interactions between the added software and the existing software system into which it was being introduced. This guided us to choose a hazard analysis method specifically for software interactions. In addition, we were able to automate generation of assurance case elements from the hazard analysis' tabular representation. 2. Question: Would our methods prove understandable to the software engineers tasked with integrating the software into the CubeSat's existing system? Conclusion: Somewhat. In interim discussions with the software engineers we found the assurance case style, of decomposing an argument into smaller pieces, to be useful and understandable to organize discussion. Ultimately however we did not persuade them to adopt assurance cases as the means to present review information. We attribute this to reluctance to deviate from JPL's tried and true style of holding reviews. For the CubeSat project as a whole, hosting an autonomy demonstration was already a novelty. Combining this with presentation of review information via an assurance case, with which our reviewers would be unaccustomed, would have exacerbated the unfamiliarity. 3. Question: Would conducting our methods prove to be compatible with the (limited) time available of the software engineers? Conclusion: Yes. We used a series of six brief meetings (approximately one hour each) with the development team to first identify the interactions as the area on which to focus, and to then perform the hazard analysis on those interactions. We used the meetings to confirm, or correct as necessary, our understanding of the software system and the spacecraft context. Between meetings we studied the existing software documentation, did preliminary analyses by ourselves, and documented the results in a concise form suitable for discussion with the team. 4. Question: Would our methods yield useful results to the software engineers? Conclusion: Yes. The hazard analysis systematically confirmed existing hazards' mitigations, and drew attention to a mitigation whose implementation needed particular care. In some cases, the analysis identified potential hazards - and what to do about them - should some of the more sophisticated capabilities of the planning and execution software be used. These capabilities, not exercised in the initial experiments on the CubeSat, may be used in future experiments. 
We remain involved with the developers as they prepare for these future experiments, so our analysis results will be of benefit as these proceed.
2021-04-08
Boato, G., Dang-Nguyen, D., Natale, F. G. B. De.  2020.  Morphological Filter Detector for Image Forensics Applications. IEEE Access. 8:13549—13560.
Mathematical morphology provides a large set of powerful non-linear image operators, widely used for feature extraction, noise removal, or image enhancement. Although morphological filters might be used to remove artifacts produced by image manipulations, on both binary and gray-level documents, little effort has been spent on their forensic identification. In this paper we propose a non-trivial extension of a deterministic approach originally designed to detect erosion and dilation of binary images. The proposed approach operates on grayscale images and is robust to image compression and other typical attacks. When the image is attacked, the method loses its deterministic nature and uses a properly trained SVM classifier, with the original detector acting as a feature extractor. Extensive tests demonstrate that the proposed method guarantees very high accuracy in filtering detection, providing 100% accuracy in discriminating the presence and the type of morphological filter in raw images of three different datasets. The achieved accuracy is also good after JPEG compression, equal to or above 76.8% on all datasets for quality factors above 80. The proposed approach is also able to determine the adopted structuring element for moderate compression factors. Finally, it is robust against noise addition and can distinguish morphological filters from other filters.
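A hedged sketch of the underlying deterministic idea follows, built on the standard morphological identities that an eroded image is invariant under closing with the same structuring element and a dilated image is invariant under opening with it. This is one plausible reading of such a detector, not the paper's exact method; the paper feeds the original detector's output to an SVM when the image has been compressed or otherwise attacked.

```python
# Hedged sketch, not the paper's detector: near-zero residual under closing
# (resp. opening) with a candidate structuring element is consistent with the
# image having been eroded (resp. dilated) with that element.
import numpy as np
from scipy import ndimage

def morphological_filter_evidence(img: np.ndarray, se_size: int = 3) -> dict:
    """Mean absolute residuals under closing/opening with a flat se_size x se_size element."""
    img = img.astype(float)
    closed = ndimage.grey_closing(img, size=(se_size, se_size))
    opened = ndimage.grey_opening(img, size=(se_size, se_size))
    return {
        "erosion_residual": float(np.abs(closed - img).mean()),   # ~0 if previously eroded
        "dilation_residual": float(np.abs(opened - img).mean()),  # ~0 if previously dilated
    }
```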
Zhang, J., Liao, Y., Zhu, X., Wang, H., Ding, J..  2020.  A Deep Learning Approach in the Discrete Cosine Transform Domain to Median Filtering Forensics. IEEE Signal Processing Letters. 27:276—280.
This letter presents a novel median filtering forensics approach, based on a convolutional neural network (CNN) with an adaptive filtering layer (AFL), which is built in the discrete cosine transform (DCT) domain. Using the proposed AFL, the CNN can determine the main frequency range closely related with the operational traces. Then, to automatically learn the multi-scale manipulation features, a multi-scale convolutional block is developed, exploring a new multi-scale feature fusion strategy based on the maxout function. The resultant features are further processed by a convolutional stream with pooling and batch normalization operations, and finally fed into the classification layer with the Softmax function. Experimental results show that our proposed approach is able to accurately detect the median filtering manipulation and outperforms the state-of-the-art schemes, especially in the scenarios of low image resolution and serious compression loss.
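The sketch below gives a minimal PyTorch stand-in for this kind of DCT-domain detector: a learnable 1x1 convolution as a crude substitute for the adaptive filtering layer, a small convolutional stream with batch normalization, and a two-class output. Input size, channel counts, and the use of ReLU instead of maxout are simplifications, not the letter's architecture.

```python
# Minimal PyTorch sketch, assuming grayscale DCT-domain patches as input.
# Layer sizes and the plain-ReLU blocks are simplifications of the letter's
# adaptive filtering layer and multi-scale maxout blocks.
import torch
import torch.nn as nn

class MedianFilterDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable stand-in for the adaptive filtering layer: a 1x1 conv that
        # reweights the DCT-domain input (the paper's AFL is more elaborate).
        self.afl = nn.Conv2d(1, 8, kernel_size=1)
        self.features = nn.Sequential(
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # median-filtered vs. original

    def forward(self, dct_patch: torch.Tensor) -> torch.Tensor:
        x = self.afl(dct_patch)
        x = self.features(x).flatten(1)
        return self.classifier(x)  # softmax is applied by the loss (e.g. CrossEntropyLoss)

# dct_patch would be the 2-D DCT of an image patch (e.g. via scipy.fft.dctn),
# reshaped to a (batch, 1, H, W) tensor.
```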
Mayer, O., Stamm, M. C..  2020.  Forensic Similarity for Digital Images. IEEE Transactions on Information Forensics and Security. 15:1331—1346.
In this paper, we introduce a new digital image forensics approach called forensic similarity, which determines whether two image patches contain the same forensic trace or different forensic traces. One benefit of this approach is that prior knowledge, e.g., training samples, of a forensic trace is not required to make a forensic similarity decision on it in the future. To do this, we propose a two-part deep-learning system composed of a convolutional neural network-based feature extractor and a three-layer neural network, called the similarity network. This system maps the pairs of image patches to a score indicating whether they contain the same or different forensic traces. We evaluated the system accuracy of determining whether two image patches were captured by the same or different camera model and manipulated by the same or a different editing operation and the same or a different manipulation parameter, given a particular editing operation. Experiments demonstrate applicability to a variety of forensic traces and importantly show efficacy on “unknown” forensic traces that were not used to train the system. Experiments also show that the proposed system significantly improves upon prior art, reducing error rates by more than half. Furthermore, we demonstrated the utility of the forensic similarity approach in two practical applications: forgery detection and localization, and database consistency verification.
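A rough PyTorch sketch of the two-part structure is shown below: a shared CNN feature extractor applied to both patches and a three-layer comparison network that outputs a same-trace score. Layer sizes, the grayscale input assumption, and the feature-combination step are placeholders rather than the authors' configuration.

```python
# Rough sketch of a two-part forensic similarity system: shared patch encoder
# plus a three-layer similarity network. Sizes are placeholders.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class SimilarityNet(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.encoder = PatchEncoder(feat_dim)
        # Three-layer comparison network over the combined pair features.
        self.compare = nn.Sequential(
            nn.Linear(3 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, patch_a, patch_b):
        fa, fb = self.encoder(patch_a), self.encoder(patch_b)
        pair = torch.cat([fa, fb, fa * fb], dim=1)  # include an interaction term
        return torch.sigmoid(self.compare(pair))    # 1 = same forensic trace
```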
Rhee, K. H..  2020.  Composition of Visual Feature Vector Pattern for Deep Learning in Image Forensics. IEEE Access. 8:188970—188980.
In image forensics, to determine whether an image has been impurely transformed, the features contained in the suspicious image are extracted and examined. In general, the features extracted for the detection of forged images are numerical values, so they are somewhat unsuitable for direct use in a CNN structure for image classification. In this paper, a feature vector is extracted using a least-squares solution: a suspicious image is treated as a matrix, and the coefficients of its least-squares solution are taken as the feature vector. Two such solutions are obtained from two images, the original and its median filter residual (MFR). Subsequently, the two features are formed into a visualized pattern and then fed into CNN deep learning to classify the various transformed images. A new CNN layer structure was also designed as a hybrid of the inception module and the residual block to classify the visualized feature vector patterns. The performance of the proposed image forensics detection (IFD) scheme was measured on seven types of transformed images: average filtered (window size: 3 × 3), Gaussian filtered (window size: 3 × 3), JPEG compressed (quality factor: 90, 70), median filtered (window size: 3 × 3, 5 × 5), and unaltered. The visualized patterns are fed into the image input layer of the designed CNN hybrid model. Throughout the experiments, the accuracy of median filtering detection was over 98%. Also, the area under the curve (AUC) defined by sensitivity (TP: true positive rate) and 1-specificity (FP: false positive rate) for the proposed IFD scheme approached 1 on the designed CNN hybrid model. Experimental results show high efficiency and performance in classifying the various transformed images. Therefore, the grade evaluation of the proposed scheme is “Excellent (A)”.
Guerrini, F., Dalai, M., Leonardi, R..  2020.  Minimal Information Exchange for Secure Image Hash-Based Geometric Transformations Estimation. IEEE Transactions on Information Forensics and Security. 15:3482—3496.
Signal processing applications dealing with secure transmission are enjoying increasing attention lately. This paper provides some theoretical insights as well as a practical solution for transmitting a hash of an image to a central server to be compared with a reference image. The proposed solution employs a rigid image registration technique viewed in a distributed source coding perspective. In essence, it embodies a phase encoding framework to let the decoder estimate the transformation parameters using a very modest amount of information about the original image. The problem is first cast in an ideal setting and then solved in a realistic scenario, giving more prominence to low computational complexity at both the transmitter and receiver, minimal hash size, and hash security. Satisfactory experimental results are reported on a standard image set.
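For orientation, the sketch below shows plain phase correlation for estimating a translation between two images from phase information only; the paper's hash-based scheme transmits far less data and also handles richer geometric transformations, so this is background on the phase-encoding principle rather than the proposed method.

```python
# Background illustration only: classical phase correlation, which estimates a
# circular shift between two images using only phase information.
import numpy as np

def estimate_translation(ref: np.ndarray, moved: np.ndarray):
    """Return the (row, col) shift d such that moved is approximately np.roll(ref, d)."""
    F_ref = np.fft.fft2(ref)
    F_moved = np.fft.fft2(moved)
    cross_power = F_moved * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(estimate_translation(img, shifted))  # -> (5, -3)
```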
Zheng, Y., Cao, Y., Chang, C..  2020.  A PUF-Based Data-Device Hash for Tampered Image Detection and Source Camera Identification. IEEE Transactions on Information Forensics and Security. 15:620—634.
With the increasing prevalence of digital devices and their abuse for digital content creation, forgeries of digital images and video footage are more rampant than ever. Digital forensics is challenged to seek advanced technologies for forged content detection and acquisition device identification. Unfortunately, existing solutions that address image tampering problems fail to identify the device that produced the images or footage, while techniques that can identify the camera are incapable of locating the tampered content in its captured images. In this paper, a new perceptual data-device hash is proposed to locate maliciously tampered image regions and identify the source camera of the received image data as a non-repudiable attestation in digital forensics. The presented image may have been either tampered with or subjected to benign content-preserving geometric transforms or image processing operations. The proposed image hash is generated by projecting the invariant image features into a physical unclonable function (PUF)-defined Bernoulli random space. The tamper-resistant random PUF response is unique for each camera and can only be generated upon being triggered by a challenge, which is provided by the image acquisition timestamp. The proposed hash is evaluated on the modified CASIA database and a CMOS image sensor-based PUF simulated using 180 nm TSMC technology. It achieves a high tamper detection rate of 95.42% with the regions of tampered content successfully located, a good authentication performance of above 98.5% against standard content-preserving manipulations, and 96.25% and 90.42%, respectively, for the more challenging geometric transformations of rotation (0 to 360°) and scaling (scale factor in each dimension: 0.5). It is demonstrated to be able to identify the source camera with 100% accuracy and is secure against attacks on the PUF.
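The sketch below illustrates the general idea of projecting a feature vector into a Bernoulli random space keyed by a device secret and binarizing the result; here an ordinary integer seed stands in for the PUF response, and the feature extraction, dimensions, and matching threshold are assumptions for illustration.

```python
# Conceptual sketch: a data-device hash obtained by projecting image features
# through a Bernoulli(+1/-1) matrix derived from a device-specific secret.
# A plain integer seed stands in for the PUF response here; this is not the
# paper's construction, only the random-projection idea behind it.
import numpy as np

def data_device_hash(features: np.ndarray, puf_response_seed: int,
                     hash_bits: int = 256) -> np.ndarray:
    """Binary hash that depends on both the image features and the device secret."""
    rng = np.random.default_rng(puf_response_seed)
    projection = rng.choice([-1.0, 1.0], size=(hash_bits, features.size))
    return (projection @ features.ravel() >= 0).astype(np.uint8)

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    return int(np.count_nonzero(h1 != h2))

# Matching: the received image is re-hashed with the claimed camera's secret;
# a small Hamming distance supports both content integrity and source identity.
```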
Al-Dhaqm, A., Razak, S. A., Ikuesan, R. A., Kebande, V. R., Siddique, K..  2020.  A Review of Mobile Forensic Investigation Process Models. IEEE Access. 8:173359—173375.
The Mobile Forensics (MF) field uses prescribed scientific approaches with a focus on recovering Potential Digital Evidence (PDE) from mobile devices by leveraging forensic techniques. Consequently, increased proliferation of mobile devices, mobile-based services, and the need for new requirements have led to the development of the MF field, which has in the recent past become an area of importance. In this article, the authors conduct a review of Mobile Forensic Investigation Process Models (MFIPMs) as a step towards uncovering the MF transitions as well as identifying open and future challenges. Based on the study conducted in this article, a review of the literature revealed that there are a few MFIPMs that are designed for solving certain mobile scenarios, with a variety of concepts, investigation processes, activities, and tasks. A total of 100 MFIPMs were reviewed to present an inclusive and up-to-date background of MFIPMs. This study also proposes a Harmonized Mobile Forensic Investigation Process Model (HMFIPM) for the MF field to unify and structure the otherwise redundant investigation processes of the MF field. The paper also discusses the state of the art of mobile forensic tools and open and future challenges from a generic standpoint. The results of this study are directly relevant to forensic practitioners and researchers, who could leverage the comprehensiveness of the developed processes for investigation.
Verdoliva, L..  2020.  Media Forensics and DeepFakes: An Overview. IEEE Journal of Selected Topics in Signal Processing. 14:910—932.
With the rapid progress in recent years, techniques that generate and manipulate multimedia content can now provide a very advanced level of realism. The boundary between real and synthetic media has become very thin. On the one hand, this opens the door to a series of exciting applications in different fields such as creative arts, advertising, film production, and video games. On the other hand, it poses enormous security threats. Software packages freely available on the web allow any individual, without special skills, to create very realistic fake images and videos. These can be used to manipulate public opinion during elections, commit fraud, discredit or blackmail people. Therefore, there is an urgent need for automated tools capable of detecting false multimedia content and avoiding the spread of dangerous false information. This review paper aims to present an analysis of the methods for visual media integrity verification, that is, the detection of manipulated images and videos. Special emphasis will be placed on the emerging phenomenon of deepfakes, fake media created through deep learning tools, and on modern data-driven forensic methods to fight them. The analysis will help highlight the limits of current forensic tools, the most relevant issues, the upcoming challenges, and suggest future directions for research.
Guo, T., Zhou, R., Tian, C..  2020.  On the Information Leakage in Private Information Retrieval Systems. IEEE Transactions on Information Forensics and Security. 15:2999—3012.
We consider information leakage to the user in private information retrieval (PIR) systems. Information leakage can be measured in terms of individual message leakage or total leakage. Individual message leakage, or simply individual leakage, is defined as the amount of information that the user can obtain on any individual message that is not being requested, and the total leakage is defined as the amount of information that the user can obtain about all the other messages except the one being requested. In this work, we characterize the tradeoff between the minimum download cost and the individual leakage, and that for the total leakage, respectively. Coding schemes are proposed to achieve these optimal tradeoffs, which are also shown to be optimal in terms of the message size. We further characterize the optimal tradeoff between the minimum amount of common randomness and the total leakage. Moreover, we show that under individual leakage, common randomness is in fact unnecessary when there are more than two messages.
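For orientation only, the following notes the two textbook endpoints of this tradeoff, classical PIR (no constraint on leakage to the user) and symmetric PIR (zero total leakage); the expressions below are standard background results, not this paper's leakage-constrained characterization.

```latex
% Background endpoints (standard results, not this paper's contribution):
% K messages of L symbols replicated on N non-colluding servers.
\[
  D_{\mathrm{PIR}} \;=\; L\Bigl(1 + \tfrac{1}{N} + \cdots + \tfrac{1}{N^{K-1}}\Bigr),
  \qquad
  D_{\mathrm{SPIR}} \;=\; \frac{L\,N}{N-1},
\]
% Classical PIR places no constraint on what the user learns about the other
% messages, while symmetric PIR allows zero total leakage and requires
% server-shared common randomness; the paper characterizes the regime between
% these endpoints as a function of the permitted individual or total leakage.
```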
Al-Dhaqm, A., Razak, S. A., Dampier, D. A., Choo, K. R., Siddique, K., Ikuesan, R. A., Alqarni, A., Kebande, V. R..  2020.  Categorization and Organization of Database Forensic Investigation Processes. IEEE Access. 8:112846—112858.
Database forensic investigation (DBFI) is an important area of research within digital forensics. Its importance is growing as digital data become more extensive and commonplace. The challenges associated with DBFI are numerous, and one of them is the lack of a harmonized DBFI process for investigators to follow. In this paper, therefore, we conduct a survey of existing literature with the aim of understanding the body of work already accomplished. Furthermore, we build on the existing literature to present a harmonized DBFI process using design science research methodology. This harmonized DBFI process has been developed based on three key categories (i.e., planning, preparation and pre-response; acquisition and preservation; and analysis and reconstruction). Furthermore, the DBFI process has been designed to avoid confusion and ambiguity, as well as to provide practitioners with a systematic method of performing DBFI with a higher degree of certainty.
Yang, Z., Sun, Q., Zhang, Y., Zhu, L., Ji, W..  2020.  Inference of Suspicious Co-Visitation and Co-Rating Behaviors and Abnormality Forensics for Recommender Systems. IEEE Transactions on Information Forensics and Security. 15:2766—2781.
The pervasiveness of personalized collaborative recommender systems has shown the powerful capability in a wide range of E-commerce services such as Amazon, TripAdvisor, Yelp, etc. However, fundamental vulnerabilities of collaborative recommender systems leave space for malicious users to affect the recommendation results as the attackers desire. A vast majority of existing detection methods assume certain properties of malicious attacks are given in advance. In reality, improving the detection performance is usually constrained due to the challenging issues: (a) various types of malicious attacks coexist, (b) limited representations of malicious attack behaviors, and (c) practical evidences for exploring and spotting anomalies on real-world data are scarce. In this paper, we investigate a unified detection framework in an eye for an eye manner without being bothered by the details of the attacks. Firstly, co-visitation and co-rating graphs are constructed using association rules. Then, attribute representations of nodes are empirically developed from the perspectives of linkage pattern, structure-based property and inherent association of nodes. Finally, both attribute information and connective coherence of graph are combined in order to infer suspicious nodes. Extensive experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed detection approach compared with competing benchmarks. Additionally, abnormality forensics metrics including distribution of rating intention, time aggregation of suspicious ratings, degree distributions before as well as after removing suspicious nodes and time series analysis of historical ratings, are provided so as to discover interesting findings such as suspicious nodes (items or ratings) on real-world data.
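The sketch below covers only the first step described above, building a co-visitation graph from user histories and deriving simple linkage-pattern attributes per item; the support threshold and attribute choices are illustrative, and the subsequent suspicious-node inference is not reproduced.

```python
# Simplified sketch of the graph-construction step only: items are linked when
# visited by the same user, and basic per-node linkage attributes are derived.
# Thresholds and downstream suspicious-node inference are not reproduced.
from collections import defaultdict
from itertools import combinations
from typing import Dict, List

def co_visitation_graph(histories: Dict[str, List[str]], min_support: int = 2):
    """Return {item: {neighbor: co-visit count}}, keeping edges with enough support."""
    counts = defaultdict(int)
    for items in histories.values():
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1
    graph = defaultdict(dict)
    for (a, b), c in counts.items():
        if c >= min_support:
            graph[a][b] = c
            graph[b][a] = c
    return graph

def node_attributes(graph):
    """Degree and average edge weight per item, usable as linkage-pattern features."""
    return {n: {"degree": len(nbrs),
                "avg_weight": sum(nbrs.values()) / len(nbrs)}
            for n, nbrs in graph.items() if nbrs}

histories = {"u1": ["i1", "i2", "i3"], "u2": ["i1", "i2"], "u3": ["i2", "i3", "i1"]}
print(node_attributes(co_visitation_graph(histories)))
```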
Yaseen, Q., Panda, B..  2012.  Tackling Insider Threat in Cloud Relational Databases. 2012 IEEE Fifth International Conference on Utility and Cloud Computing. :215—218.
Cloud security is one of the major issues that worry individuals and organizations about cloud computing. Therefore, defending cloud systems against attacks such as insiders' attacks has become a key demand. This paper investigates insider threat in cloud relational database systems (cloud RDMS). It discusses some vulnerabilities in cloud computing structures that may enable insiders to launch attacks, and shows how load balancing across multiple availability zones may facilitate insider threat. To prevent such a threat, the paper suggests three models, namely a Peer-to-Peer model, a Centralized model and a Mobile-Knowledgebase model, and addresses the conditions under which they work well.
Igbe, O., Saadawi, T..  2018.  Insider Threat Detection using an Artificial Immune system Algorithm. 2018 9th IEEE Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :297—302.
Insider threats result from legitimate users abusing their privileges, causing tremendous damage or losses. Malicious insiders can be the main threats to an organization. This paper presents an anomaly detection system for detecting insider threat activities in an organization using an ensemble that consists of negative selection algorithms (NSA). The proposed system classifies a selected user activity into either of two classes: "normal" or "malicious." The effectiveness of our proposed detection system is evaluated using case studies from the computer emergency response team (CERT) synthetic insider threat dataset. Our results show that the proposed method is very effective in detecting insider threats.
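A toy sketch of negative selection on binary-encoded activity profiles follows: candidate detectors that match any normal ("self") sample are discarded, and the survivors flag matching activity as anomalous. The encoding, the r-contiguous-bits matching rule, and all parameters are illustrative choices, not those of the paper's ensemble.

```python
# Toy negative selection sketch on binary-encoded user activity. The encoding,
# r-contiguous-bits matching rule, and parameters are illustrative choices.
import random

def matches(detector: str, sample: str, r: int) -> bool:
    """r-contiguous-bits matching rule."""
    return any(detector[i:i + r] == sample[i:i + r] for i in range(len(sample) - r + 1))

def train_detectors(self_set, length=16, r=8, n_detectors=200, seed=1):
    """Keep random detectors that match no normal ("self") sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):  # negative selection
            detectors.append(cand)
    return detectors

def classify(sample, detectors, r=8):
    return "malicious" if any(matches(d, sample, r) for d in detectors) else "normal"

normal_activity = {"0000111100001111", "0000111100110011"}
dets = train_detectors(normal_activity)
print(classify("1111000011110000", dets), classify("0000111100001111", dets))
```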
Zhang, T., Zhao, P..  2010.  Insider Threat Identification System Model Based on Rough Set Dimensionality Reduction. 2010 Second World Congress on Software Engineering. 2:111—114.
Insider threats cause great damage to the security of information systems, and traditional security methods have great difficulty countering them. Insider attack identification plays an important role in insider threat detection. Monitoring users' abnormal behavior is an effective method to detect impersonation, and this method is applied to insider threat identification: a database of users' behavioral attributes is built based on a weights-changeable feedback tree-augmented Bayes network, and because the data are massive, rough-set-based dimensionality reduction is used to establish the process information model of users' behavioral attributes. Using the minimum-risk Bayes decision, the real identity of a user can be effectively identified when the user's behavior departs from the characteristic model.
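The sketch below works through the minimum-risk Bayes decision step in isolation: given a posterior over {legitimate, impersonator} and a loss matrix, it picks the action with the lowest expected risk. The numbers are invented for illustration, and the upstream Bayes-network and rough-set components are not modeled.

```python
# Worked sketch of the minimum-risk Bayes decision step only; the posterior
# and loss values are invented for illustration.
import numpy as np

def minimum_risk_decision(posterior: np.ndarray, loss: np.ndarray) -> int:
    """posterior[j]: P(class j | behavior); loss[a, j]: cost of action a when truth is j."""
    expected_risk = loss @ posterior      # R(a) = sum_j loss[a, j] * posterior[j]
    return int(np.argmin(expected_risk))

# Classes: 0 = legitimate user, 1 = impersonator. Actions: 0 = allow, 1 = raise alert.
posterior = np.array([0.7, 0.3])
loss = np.array([[0.0, 10.0],   # allowing an impersonator is very costly
                 [1.0,  0.0]])  # a false alert has a small cost
print(minimum_risk_decision(posterior, loss))  # -> 1: alert, despite P(impersonator) = 0.3
```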
Althebyan, Q..  2019.  A Mobile Edge Mitigation Model for Insider Threats: A Knowledgebase Approach. 2019 International Arab Conference on Information Technology (ACIT). :188—192.
Taking care of security in the cloud is a major issue that needs to be carefully considered and solved for both individuals and organizations. On the one hand, organizations usually expect trust from employees as well as customers. On the other hand, cloud users expect their private data to be maintained and secured. Although this should be the case, some malicious outsiders of the cloud, as well as malicious insiders who are internal cloud users, tend to disclose private data for their own malicious uses. Although outsiders of the cloud are a concern, the more serious problems come from insiders, whose malicious actions are more serious and severe. Hence, insider threats in the cloud should be the topmost problem to be tackled and resolved. This paper aims to find a proper solution for the insider threat problem in the cloud. The paper presents a Mobile Edge Computing (MEC) mitigation model as a solution that suits the specialized nature of this problem, where the solution needs to be very close to the place where insiders reside. This in fact gives real-time responses to attacks and hence reduces the overhead in the cloud.
Roy, P., Mazumdar, C..  2018.  Modeling of Insider Threat using Enterprise Automaton. 2018 Fifth International Conference on Emerging Applications of Information Technology (EAIT). :1—4.
Substantial portions of attacks on the security of enterprises are perpetrated by insiders holding authorized privileges. Insider threat and attack detection is thus an important aspect of security management. In the published literature, efforts are underway to model insider threats based on the behavioral traits of employees. Psycho-social behaviors are hard to encode in software systems, and in some cases there are privacy issues involved. In this paper, the human and non-human agents in a system are described in a novel unified model. The enterprise is described as an automaton, and its states are classified as secure, safe, unsafe, and compromised. The insider agents and threats are modeled on the basis of the automaton, and the model is validated using a case study.
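A minimal sketch of the automaton view follows, with the four states named in the abstract and a hand-written transition table driven by agent actions; the events and transitions are invented for illustration, since the paper derives them from its unified agent model.

```python
# Minimal sketch of the enterprise-as-automaton view. The states come from the
# abstract; the events and transition table are invented for illustration.
SECURE, SAFE, UNSAFE, COMPROMISED = "secure", "safe", "unsafe", "compromised"

TRANSITIONS = {
    (SECURE, "grant_privilege"): SAFE,
    (SAFE, "revoke_privilege"): SECURE,
    (SAFE, "policy_violation"): UNSAFE,
    (UNSAFE, "remediate"): SAFE,
    (UNSAFE, "exfiltrate_data"): COMPROMISED,
}

class EnterpriseAutomaton:
    def __init__(self, state: str = SECURE):
        self.state = state

    def step(self, event: str) -> str:
        """Apply an agent action; unknown events leave the state unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

ent = EnterpriseAutomaton()
for event in ["grant_privilege", "policy_violation", "exfiltrate_data"]:
    print(event, "->", ent.step(event))   # ends in the 'compromised' state
```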
Zhang, H., Ma, J., Wang, Y., Pei, Q..  2009.  An Active Defense Model and Framework of Insider Threats Detection and Sense. 2009 Fifth International Conference on Information Assurance and Security. 1:258—261.
Insider attack is a well-known problem, acknowledged as a threat as early as the 1980s. The threat is attributed to legitimate users who take advantage of their familiarity with the computational environment and abuse their privileges, and who can easily cause significant damage or losses. In this paper, we present an active defense model and framework for insider threat detection and sensing. First, we describe the hierarchical framework, which deals with insider threat from several aspects, and subsequently we show a hierarchy-mapping based insider threat model, the kernel of threat detection, sensing, and prediction. The experiments show that the model and framework can effectively sense insider threats in real time.