Biblio

Filters: Keyword is Feeds
2020-04-13
Horne, Benjamin D., Gruppi, Mauricio, Adali, Sibel.  2019.  Trustworthy Misinformation Mitigation with Soft Information Nudging. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :245–254.
Research in combating misinformation reports many negative results: facts may not change minds, especially if they come from sources that are not trusted. Individuals can disregard and justify lies told by trusted sources. This problem is made even worse by social recommendation algorithms, which help amplify conspiracy theories and information confirming one's own biases due to companies' efforts to optimize for clicks and watch time over individuals' own values and the public good. As a result, more nuanced voices and facts are drowned out by a continuous erosion of trust in better information sources. Most misinformation mitigation techniques assume that discrediting, filtering, or demoting low-veracity information will help news consumers make better information decisions. However, these negative results indicate that some news consumers, particularly extreme or conspiracy news consumers, will not be helped. We argue that, given this background, technology solutions for combating misinformation should not simply seek facts or discredit bad news sources, but instead use more subtle nudges towards better information consumption. Repeated exposure to such nudges can help promote trust in better information sources and also improve societal outcomes in the long run. In this article, we discuss technological solutions that can help in developing such an approach, and introduce one such model called Trust Nudging.
2020-02-17
Skopik, Florian, Filip, Stefan.  2019.  Design principles for national cyber security sensor networks: Lessons learned from small-scale demonstrators. 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–8.
The timely exchange of information on new threats and vulnerabilities has become a cornerstone of effective cyber defence in recent years. National authorities in particular increasingly assume their role as information brokers through national cyber security centres and distribute warnings on new attack vectors and vital recommendations on how to mitigate them. Although many of these initiatives are effective to some degree, they also suffer from severe limitations. Many steps in the exchange process require extensive human involvement to manually review, vet, enrich, analyse and distribute security information. Some countries have therefore started to adopt distributed cyber security sensor networks to enable the automatic collection, analysis and preparation of security data and thus effectively overcome limiting scalability factors. The basic idea of IoC-centric cyber security sensor networks is that the national authorities distribute Indicators of Compromise (IoCs) to organizations and receive sightings in return. This effectively helps them to estimate the spreading of malware, anticipate further trends of spreading and derive vital findings for decision makers. While this application case seems quite simple, there are some tough questions to be answered in advance, which steer the subsequent design decisions: How much can the monitored organization be trusted to be a partner in the search for malware? How much control of the scanning process should be delegated to the organization? What is the right level of search depth? How to deal with confidential indicators? What can be derived from encrypted traffic? How are new indicators distributed, prioritized, and scan targets selected in a scalable manner? What is a good strategy to re-schedule scans to derive meaningful data on trends, such as rate of spreading? This paper suggests a blueprint for a sensor network, raises related questions, outlines design principles, and discusses lessons learned from small-scale pilots.
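As a rough illustration of the IoC-centric exchange loop described above, the following Python sketch shows an authority distributing indicators, receiving sightings from organizations, and estimating how widely an indicator has spread. All class and field names are assumptions for illustration, not from the paper.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Indicator:
        ioc_id: str          # e.g. a file hash, domain, or IP pushed out by the authority
        pattern: str
        confidential: bool = False   # confidential IoCs may need special handling

    @dataclass
    class Sighting:
        ioc_id: str
        org_id: str
        seen_at: datetime

    class Authority:
        def __init__(self):
            self.indicators = {}     # ioc_id -> Indicator, distributed to sensor networks
            self.sightings = []      # Sighting reports returned by organizations

        def distribute(self, indicator):
            self.indicators[indicator.ioc_id] = indicator

        def receive_sighting(self, sighting):
            self.sightings.append(sighting)

        def spread(self, ioc_id):
            # Number of distinct organizations reporting the indicator: a crude
            # proxy for the "rate of spreading" that decision makers care about.
            return len({s.org_id for s in self.sightings if s.ioc_id == ioc_id})

    authority = Authority()
    authority.distribute(Indicator("ioc-1", "bad.example.net"))
    authority.receive_sighting(Sighting("ioc-1", "org-A", datetime.now()))
    authority.receive_sighting(Sighting("ioc-1", "org-B", datetime.now()))
    print(authority.spread("ioc-1"))   # -> 2
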
2020-02-10
Elakkiya, E, Selvakumar, S.  2019.  Initial Weights Optimization Using Enhanced Step Size Firefly Algorithm for Feed Forward Neural Network Applied to Spam Detection. TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). :942–946.

Spam consists of unsolicited and unnecessary messages which may contain harmful code or links that activate malicious viruses and spyware. The increasing popularity of social networks attracts spammers to perform malicious activities on them, so an efficient spam detection method is necessary for social networks. In this paper, a spam detection model based on a feed forward neural network with back propagation is proposed. The quality of the learning process is improved by tuning the initial weights of the feed forward neural network using the proposed enhanced step size firefly algorithm, which reduces the time needed to find optimal weights during the learning process. The model is applied to a Twitter dataset, and the experimental results show that the proposed model performs well in terms of accuracy and detection rate and has a lower false positive rate.
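The enhanced step size firefly idea can be pictured with a small sketch: each firefly is a candidate initial-weight vector for a toy feed-forward network, brightness corresponds to low loss, and the random step shrinks each generation before the best vector is handed to back-propagation. This is an illustrative sketch under assumed data and parameters, not the authors' algorithm or code.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                # toy feature matrix (e.g. tweet features)
    y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy spam / not-spam labels

    n_in, n_hid = 10, 8
    dim = n_in * n_hid + n_hid                    # hidden-layer weights + output weights

    def loss(w):
        W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
        w2 = w[n_in * n_hid:]
        h = np.tanh(X @ W1)
        p = 1.0 / (1.0 + np.exp(-(h @ w2)))
        return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    pop = rng.uniform(-1, 1, size=(15, dim))      # fireflies = candidate weight vectors
    beta0, gamma, alpha = 1.0, 1.0, 0.5
    for t in range(30):
        fit = np.array([loss(w) for w in pop])    # brightness = low loss
        for i in range(len(pop)):
            for j in range(len(pop)):
                if fit[j] < fit[i]:               # move towards a brighter firefly
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)
        alpha *= 0.95                             # "enhanced" step size: shrink the random step

    best = pop[np.argmin([loss(w) for w in pop])] # use as initial weights for back-propagation
    print("initial-weight loss:", loss(best))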

2020-01-27
Kala, T. Sree, Christy, A..  2019.  An Intrusion Detection System using Opposition based Particle Swarm Optimization Algorithm and PNN. 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). :184–188.
Network security has become a vital topic nowadays. Anomaly-based Intrusion Detection Systems (IDSs) [1] play an indispensable role in identifying attacks from networks, and their detection rate and accuracy are said to be high. The proposed work explores this topic and addresses the issue with an IDS model developed using an Artificial Neural Network (ANN). The model uses feed-forward neural network algorithms, a Probabilistic Neural Network (PNN), and an opposition-based Particle Swarm Optimization algorithm to lessen the computational overhead and boost the performance level. The computational overhead produced during execution and training is minimized by the various optimization techniques used in the developed ANN-based IDS. An experimental study of the developed system, tested using the standard NSL-KDD dataset, shows that it performs well when compared with other intrusion detection models built using NN, RB and OPSO algorithms.
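A rough sketch of the two components named above, on invented data: a Probabilistic Neural Network used as a Parzen-window classifier, with its smoothing parameter tuned by an opposition-based particle swarm. The bounds, coefficients and feature set are assumptions for illustration only, not the authors' configuration.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])  # toy "normal"/"attack"
    y = np.array([0] * 100 + [1] * 100)

    def pnn_accuracy(sigma):
        # Leave-one-out PNN: class score = mean Gaussian kernel to that class's samples.
        if sigma <= 0:
            return 0.0
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        k = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(k, 0.0)
        score0 = k[:, y == 0].mean(1)
        score1 = k[:, y == 1].mean(1)
        return np.mean((score1 > score0).astype(int) == y)

    # Opposition-based PSO over sigma in (0, 3]: keep the better of each particle
    # and its "opposite" (lo + hi - x) at initialisation, then run standard PSO.
    lo, hi, n = 0.01, 3.0, 10
    pos = rng.uniform(lo, hi, n)
    pos = np.where([pnn_accuracy(p) >= pnn_accuracy(lo + hi - p) for p in pos], pos, lo + hi - pos)
    vel = np.zeros(n)
    pbest = pos.copy()
    gbest = pbest[np.argmax([pnn_accuracy(p) for p in pbest])]
    for _ in range(20):
        r1, r2 = rng.random(n), rng.random(n)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        better = [pnn_accuracy(p) > pnn_accuracy(b) for p, b in zip(pos, pbest)]
        pbest = np.where(better, pos, pbest)
        gbest = pbest[np.argmax([pnn_accuracy(p) for p in pbest])]

    print("tuned sigma:", round(float(gbest), 3), "accuracy:", pnn_accuracy(float(gbest)))
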
2019-06-10
Sokolov, A. N., Pyatnitsky, I. A., Alabugin, S. K..  2018.  Research of Classical Machine Learning Methods and Deep Learning Models Effectiveness in Detecting Anomalies of Industrial Control System. 2018 Global Smart Industry Conference (GloSIC). :1-6.

Modern industrial control systems (ICS) have increasingly become victims of cyber attacks in recent years. These attacks are hard to detect and their consequences can be catastrophic. Cyber attacks can cause anomalies in the operation of the ICS and its technological equipment. The presence of mutual interference and noise in this equipment significantly complicates anomaly detection. Moreover, the traditional means of protection used in corporate solutions require updating with each change in the structure of the industrial process. An approach based on machine learning for anomaly detection was used to overcome these problems. It complements traditional methods and allows one to detect signal correlations and use them for anomaly detection. The Additional Tennessee Eastman Process Simulation Data for Anomaly Detection Evaluation dataset was analyzed as an example of an industrial process. In the course of the research, correlations between the signals of the sensors were detected and preliminary data processing was carried out. Algorithms from the most common machine learning techniques (decision trees, linear algorithms, support vector machines) and deep learning models (neural networks) were investigated for the industrial process anomaly detection task. It is shown that linear algorithms are the least demanding on computational resources, but they do not achieve an acceptable result and allow a significant number of errors. Decision tree-based algorithms provide acceptable accuracy, but the amount of RAM required for their operation grows polynomially with the training sample volume. Deep neural networks provide the greatest accuracy, but they require considerable computing power for internal calculations.
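The model comparison described above can be sketched with scikit-learn; the snippet below trains the same classes of models on synthetic stand-in sensor data (not the Tennessee Eastman dataset) purely to illustrate the workflow.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, (1000, 20))                                  # normal operation
    faulty = rng.normal(0, 1, (200, 20)) + rng.normal(1.5, 0.5, (200, 1))  # correlated shift = anomaly
    X = np.vstack([normal, faulty])
    y = np.array([0] * 1000 + [1] * 200)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

    models = {
        "linear": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(max_depth=8),
        "SVM": SVC(),
        "neural network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
    }
    for name, model in models.items():
        model.fit(Xtr, ytr)
        print(f"{name:15s} accuracy = {model.score(Xte, yte):.3f}")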

2019-03-04
Kannavara, R., Vangore, J., Roberts, W., Lindholm, M., Shrivastav, P..  2018.  Automating Threat Intelligence for SDL. 2018 IEEE Cybersecurity Development (SecDev). :137–137.
Threat intelligence is very important in order to execute a well-informed Security Development Lifecycle (SDL). Although there are many readily available solutions supporting tactical threat intelligence focusing on enterprise Information Technology (IT) infrastructure, the lack of threat intelligence solutions focusing on SDL is a known gap which is acknowledged by the security community. To address this shortcoming, we present a solution to automate the process of mining open source threat information sources to deliver product specific threat indicators designed to strategically inform the SDL while continuously monitoring for disclosures of relevant potential vulnerabilities during product design, development, and beyond deployment.
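A minimal sketch of the core idea, with hypothetical feed entries and component names: filter an open threat feed down to the advisories that mention components the product actually uses, so the SDL team sees product-specific indicators.

    product_components = {"openssl", "sqlite", "busybox"}

    threat_feed = [  # stand-in for parsed advisories / CVE summaries
        {"id": "ADV-1", "text": "Heap overflow in OpenSSL ASN.1 parsing"},
        {"id": "ADV-2", "text": "Privilege escalation in an unrelated driver"},
        {"id": "ADV-3", "text": "SQL injection reachable through sqlite extension loading"},
    ]

    def relevant(entry):
        words = entry["text"].lower()
        return [c for c in product_components if c in words]

    for entry in threat_feed:
        hits = relevant(entry)
        if hits:
            print(entry["id"], "-> affects:", ", ".join(sorted(hits)))
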
2018-05-01
Korczynski, M., Tajalizadehkhoob, S., Noroozian, A., Wullink, M., Hesselman, C., v Eeten, M..  2017.  Reputation Metrics Design to Improve Intermediary Incentives for Security of TLDs. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :579–594.

Over the years cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - to gain profit. Despite this persisting trend, little empirical information about the security of Top-Level Domains (TLDs) and of the overall 'health' of the DNS ecosystem exists. In this paper, we present security metrics for this ecosystem and measure the operational values of such metrics using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services, but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that next to TLD size, abuse is positively associated with domain pricing (i.e. registries who provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
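The two metric types can be illustrated with a small sketch using made-up numbers: occurrence of abuse normalised by TLD size, and persistence measured here as the median time abusive domains stay active (the concrete definitions in the paper may differ).

    from statistics import median

    tlds = {
        #            registered domains, abusive domains seen, days each stayed up
        ".com":     {"size": 150_000_000, "abuse": 90_000, "uptimes": [3, 10, 21, 2, 14]},
        ".example": {"size": 2_000_000,   "abuse":    400, "uptimes": [1, 2, 2, 5]},
    }

    for tld, d in tlds.items():
        occurrence = d["abuse"] / d["size"] * 1_000_000   # abusive domains per million registrations
        persistence = median(d["uptimes"])                # days until abusive content is taken down
        print(f"{tld:9s} occurrence = {occurrence:7.1f} /M   persistence = {persistence} days")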

2018-02-28
Boyarinov, K., Hunter, A..  2017.  Security and trust for surveillance cameras. 2017 IEEE Conference on Communications and Network Security (CNS). :384–385.

We address security and trust in the context of a commercial IP camera. We take a hands-on approach, as we not only define abstract vulnerabilities, but we actually implement the attacks on a real camera. We then discuss the nature of the attacks and the root cause; we propose a formal model of trust that can be used to address the vulnerabilities by explicitly constraining compositionality for trust relationships.

2018-02-06
Ssin, S. Y., Zucco, J. E., Walsh, J. A., Smith, R. T., Thomas, B. H..  2017.  SONA: Improving Situational Awareness of Geotagged Information Using Tangible Interfaces. 2017 International Symposium on Big Data Visual Analytics (BDVA). :1–8.

This paper introduces SONA (Spatiotemporal system Organized for Natural Analysis), a tabletop and tangible controller system for exploring geotagged information, and more specifically, CCTV. SONA's goal is to support a more natural method of interacting with data. Our new interactions are placed in the context of a physical security environment, closed circuit television (CCTV). We present a three-layered, detail-on-demand set of view filters for CCTV feeds on a digital map. These filters are controlled with a novel tangible device for direct interaction. We validate SONA's tangible controller approach with a user study comparing SONA with the existing CCTV multi-screen method. The results of the study show that SONA's tangible interaction method is superior to the multi-screen approach in terms of quantitative results and is preferred by users.

2015-05-06
Boukhtouta, A., Lakhdari, N.-E., Debbabi, M..  2014.  Inferring Malware Family through Application Protocol Sequences Signature. 2014 6th International Conference on New Technologies, Mobility and Security (NTMS). :1–5.

The dazzling emergence of cyber-threats exerts pressure on today's cyberspace, which needs practical and efficient capabilities for malware traffic detection. In this paper, we propose an extension to an initial research effort, namely, towards fingerprinting malicious traffic, by putting an emphasis on the attribution of maliciousness to malware families. The technique proposed in the previous work establishes a synergy between automatic dynamic analysis of malware and machine learning to fingerprint badness in network traffic. Machine learning algorithms are used with features that exploit only high-level properties of traffic packets (e.g. packet headers). Besides the detection of malicious packets, we want to enhance the fingerprinting capability with the identification of the malware families responsible for the generation of malicious packets. The identification of the underlying malware family is derived from a sequence of application protocols, which is used as a signature for the family in question. Furthermore, our results show that our technique achieves a promising malware family identification rate with low false positives.
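A minimal sketch of the signature idea, with hypothetical family names and protocol labels: the ordered sequence of application protocols observed for a host is looked up against known family signatures.

    family_signatures = {
        ("dns", "http", "irc"): "FamilyA",
        ("dns", "tls", "smtp"): "FamilyB",
    }

    def identify_family(protocol_sequence):
        return family_signatures.get(tuple(protocol_sequence), "unknown")

    observed = ["dns", "http", "irc"]          # protocols seen in a captured traffic trace
    print(identify_family(observed))           # -> FamilyA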

2015-05-05
Falcon, R., Abielmona, R., Billings, S., Plachkov, A., Abbass, H..  2014.  Risk management with hard-soft data fusion in maritime domain awareness. 2014 Seventh IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA). :1–8.

Enhanced situational awareness is integral to risk management and response evaluation. Dynamic systems that incorporate both hard and soft data sources allow for comprehensive situational frameworks which can supplement physical models with conceptual notions of risk. The processing of widely available semi-structured textual data sources can produce soft information that is readily consumable by such a framework. In this paper, we augment the situational awareness capabilities of a recently proposed risk management framework (RMF) with the incorporation of soft data. We illustrate the beneficial role of the hard-soft data fusion in the characterization and evaluation of potential vessels in distress within Maritime Domain Awareness (MDA) scenarios. Risk features pertaining to maritime vessels are defined a priori and then quantified in real time using both hard (e.g., Automatic Identification System, Douglas Sea Scale) as well as soft (e.g., historical records of worldwide maritime incidents) data sources. A risk-aware metric to quantify the effectiveness of the hard-soft fusion process is also proposed. Though illustrated with MDA scenarios, the proposed hard-soft fusion methodology within the RMF can be readily applied to other domains.
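A toy sketch of the hard-soft fusion step, with invented features and weights: hard (sensor-derived) and soft (text-derived) risk features for a vessel are combined into one weighted risk score; the paper's actual risk features and fusion scheme are richer than this.

    hard = {"ais_speed_anomaly": 0.7, "douglas_sea_state": 0.6}              # from AIS / sea-scale data
    soft = {"incident_history_mentions": 0.4, "distress_keywords": 0.8}      # from textual sources

    weights = {"ais_speed_anomaly": 0.3, "douglas_sea_state": 0.2,
               "incident_history_mentions": 0.2, "distress_keywords": 0.3}

    features = {**hard, **soft}
    risk = sum(weights[k] * features[k] for k in features)
    print(f"fused vessel risk score: {risk:.2f}")   # higher = more likely in distress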