Expert Systems Security 2015

SoS Newsletter- Advanced Book Block



Expert Systems Security


Expert systems have the potential for efficiency, scalability, and economy in systems security. The research cited here examines a range of systems, including SCADA, the Internet of Things, and other cyber-physical systems. The works address scalability, resilience, and measurement.

Rani, C.; Goel, S., “CSAAES: An Expert System for Cyber Security Attack Awareness,” in Computing, Communication & Automation (ICCCA), 2015 International Conference on, vol., no., pp. 242–245, 15–16 May 2015. doi:10.1109/CCAA.2015.7148381
Abstract: The Internet is used today by almost everyone, individuals and organizations alike. With this vast usage, a great deal of information is exposed online and becomes available to hackers, so many attacks on computer systems are launched through the Internet. These attacks may destroy the information on a particular system or use the system to mount further attacks, so protection against them is needed. A user may notice problems in the functioning of the computer but has no means of identifying and solving them. Knowledge about different types of attacks and their effects on a system is available from various sources, as is guidance on handling them, but identifying which attack is being performed on a given system is difficult. The expert system designed here identifies which type of attack is underway, its symptoms, and the countermeasures to resolve it. It serves as a platform for cyber-attack security awareness among Internet users.
Keywords: Internet; expert systems; security of data; CSAAES expert system; Internet; cyber security attack awareness; information exposure; Computer crime; Computers; Expert systems; Internet; Software; attacks; countermeasures; expert system; security; security framework (ID#: 15-7382)
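The abstract gives no implementation details, but the symptom-to-attack matching it describes can be sketched as a tiny rule lookup. The rule base, symptom names, and countermeasures below are invented for illustration, not taken from CSAAES:

```python
# Minimal sketch of rule-based attack identification: match observed
# symptoms against known attack profiles and report the best match
# together with its countermeasures. Rules are illustrative only.

ATTACK_RULES = {
    "dos": {
        "symptoms": {"slow_network", "service_unavailable", "high_traffic"},
        "countermeasures": ["rate limiting", "traffic filtering"],
    },
    "phishing": {
        "symptoms": {"suspicious_email", "fake_login_page"},
        "countermeasures": ["user training", "email filtering"],
    },
}

def identify_attack(observed):
    """Return (attack, score, countermeasures) for the best symptom overlap."""
    observed = set(observed)
    best = None
    for name, rule in ATTACK_RULES.items():
        overlap = len(observed & rule["symptoms"]) / len(rule["symptoms"])
        if best is None or overlap > best[1]:
            best = (name, overlap, rule["countermeasures"])
    return best

result = identify_attack(["slow_network", "high_traffic"])
```

A real expert system would add certainty factors and an explanation facility; this shows only the matching step.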


Yost, J., “The March of IDES: A History of the Intrusion Detection Expert System,” in IEEE Annals of the History of Computing, vol. PP, no. 99, pp. 1–1, 13 July 2015. doi:10.1109/MAHC.2015.41
Abstract: This paper examines the pre-history and history of early intrusion detection expert systems by focusing on the first such system, the Intrusion Detection Expert System, or IDES, which was developed in the second half of the 1980s at SRI International (and SRI’s follow-on Next Generation Intrusion Detection Expert System, or NIDES, in the early-to-mid 1990s). It also presents and briefly analyzes the outsized contribution of women scientists to leadership of this area of computer security research and development, contrasting it with the largely male-led work on “high-assurance” operating system design, development, and standard-setting.
Keywords: Communities; Computer security; Computers; Expert systems; History; Intrusion detection (ID#: 15-7383)


Neelam, Sahil; Sood, Sandeep; Mehmi, Sandeep; Dogra, Shikha, “Artificial Intelligence for Designing User Profiling System for Cloud Computing Security: Experiment,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, pp. 51–58, 19–20 March 2015. doi:10.1109/ICACEA.2015.7164645
Abstract: In cloud computing security, the existing mechanisms (anti-virus programs, authentication, firewalls) are not able to withstand the dynamic nature of threats. A User Profiling System, which registers users’ activities in order to analyze their behavior, augments the security system to work in both a proactive and a reactive manner and provides enhanced security. This paper focuses on designing a User Profiling System for the cloud environment using Artificial Intelligence techniques, studies its behavior, and proposes a new hybrid approach that will deliver a comprehensive User Profiling System for cloud computing security.
Keywords: artificial intelligence; authorisation; cloud computing; firewalls; antivirus programs; artificial intelligence techniques; authentications; cloud computing security; cloud environment; proactive manner; reactive manner; user activities; user behavior; user profiling system; Artificial intelligence; Cloud computing; Computational modeling; Fuzzy logic; Fuzzy systems; Genetic algorithms; Security; Artificial Intelligence; Artificial Neural Networks; Cloud Computing; Datacenters; Expert Systems; Genetics; Machine Learning; Multi-tenancy; Networking Systems; Pay-as-you-go Model (ID#: 15-7384)


Li Zeng Xin; Rong Xin Yan, “Accounting Information System Risk Assessment Algorithm Based on Analytic Hierarchy Process,” in Measuring Technology and Mechatronics Automation (ICMTMA), 2015 Seventh International Conference on, vol., no., pp. 72–75, 13–14 June 2015. doi:10.1109/ICMTMA.2015.25
Abstract: So far, there has been little research on accounting information system risk assessment in our country. In order to meet the security needs of accounting information systems, reduce their security risk, reduce financial losses, and improve work efficiency, a risk assessment method for enterprise accounting information systems based on the Analytic Hierarchy Process is proposed. The analytic hierarchy process model is applied to one corporate accounting information system for risk assessment. The results indicate that the proposed method yields better risk assessment results and offers strong operability and effectiveness for risk assessment in enterprise accounting information systems.
Keywords: accounting; analytic hierarchy process; information systems; risk management; security of data; accounting information system security needs; analytic hierarchy process model; corporate accounting information system; enterprise accounting information system risk assessment method; financial loss; work efficiency; Analytic hierarchy process; Expert systems; Indexes; Risk management; Security; Software; Analytic Hierarchy Process; accounting information system; risk assessment (ID#: 15-7385)
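The AHP computation such a method rests on can be sketched in a few lines: derive a priority vector from a pairwise comparison matrix and check its consistency. This is a standard AHP sketch (row geometric-mean approximation, Saaty's random indices), not the paper's exact model; the example matrix is illustrative:

```python
import math

def ahp_weights(M):
    """Approximate AHP priority vector via the row geometric-mean method."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, w):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(M)
    # lambda_max estimated from (M w) / w, averaged over the criteria
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index values
    return ci / ri

# Illustrative 3x3 pairwise comparison of risk factors
M = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
w = ahp_weights(M)
cr = consistency_ratio(M, w)
```

In AHP practice a matrix with CR below 0.1 is considered consistent enough to use its weights.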


Halim, Shamimi A.; Annamalai, Muthukkaruppan; Ahmad, Mohd Sharifuddin; Ahmad, Rashidi, “Domain Expert Maintainable Inference Knowledge of Assessment Task,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1–5, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7292974
Abstract: Inference and domain knowledge are the foundation of a Knowledge-based System (KBS). Inference knowledge describes the steps or rules used to perform a task, referring to the domain knowledge that is used. The inference knowledge is typically acquired from the domain experts and communicated to the system developers to be implemented in a KBS. The explicit representation of inference knowledge eases the maintenance of the evolving knowledge. However, the involvement of knowledge engineers and software developers during the maintenance phase causes several problems during the system’s life-cycle. In this paper, we provide a possible way of using rule templates to abstract the inference knowledge into higher conceptual categories that are amenable to domain experts. Backed by a rule-editing user interface designed to instantiate the rule templates, the responsibility for maintaining the inference knowledge can be assigned to the domain experts, i.e., the originators of the knowledge. The paper demonstrates the feasibility of the idea with a case of inference knowledge applied to assessment tasks such as triage decision making. Five rule templates to represent the inference knowledge of assessment tasks are proposed. We validated the rule templates through case studies in several domains and tasks, as well as through usability testing.
Keywords: Biological system modeling; Decision making; Expert systems; Knowledge engineering; Maintenance engineering; Medical services (ID#: 15-7386)
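A hypothetical sketch of what a domain-expert-editable rule template might look like for a triage-style assessment task; the template structure, field names, and example rule are my own illustration, not one of the paper's five templates:

```python
# One rule template plus instantiation and evaluation. A rule-editing UI
# would collect the fill-in values from the domain expert.
TEMPLATE = "IF {attribute} {operator} {value} THEN category = {category}"

def instantiate(attribute, operator, value, category):
    """Render a human-readable rule from the template."""
    return TEMPLATE.format(attribute=attribute, operator=operator,
                           value=value, category=category)

def evaluate(rule_params, observation):
    """Apply one instantiated rule to an observation (dict of attributes)."""
    attribute, operator, value, category = rule_params
    actual = observation[attribute]
    ok = {"<": actual < value, ">": actual > value,
          "==": actual == value}[operator]
    return category if ok else None

rule = ("systolic_bp", "<", 90, "immediate")
text = instantiate(*rule)
triaged = evaluate(rule, {"systolic_bp": 82})
```

Keeping the rule content in data rather than code is what lets the experts, not the developers, maintain it.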


Rummukainen, L.; Oksama, L.; Timonen, J.; Vankka, J., “Situation Awareness Requirements for a Critical Infrastructure Monitoring Operator,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225326
Abstract: This paper presents a set of situation awareness (SA) requirements for an operator who monitors critical infrastructure (CI). The requirements consist of pieces of information that the operator needs in order to be successful in their work. The purpose of this research was to define a common requirement base that can be used when designing a CI monitoring system or a user interface to support SA. The requirements can also be used during system or user interface evaluation, and as a guide for what aspects to emphasize when training new CI monitoring operators. To create the SA requirements, goal-directed task analysis (GDTA) was conducted. For GDTA, nine interview sessions were held during the research. For a clear understanding of a CI monitoring operator’s work, all interviewees were subject matter experts (SMEs) and had extensive experience in CI monitoring. Before the interviews, a day-long observation session was conducted to gather initial input for the GDTA goal hierarchy and the SA requirements. GDTA identified three goals an operator must achieve in order to be successful in their work, and they were used to define the final SA requirements. As a result, a hierarchy diagram was constructed that includes three goals: monitoring, analysis and internal communication, and external communication. The SA requirements for a CI monitoring operator include information regarding ongoing incidents in the environment and the state of systems and services in the operator’s organization.
Keywords: critical infrastructures; expert systems; task analysis; user interfaces; CI monitoring operator; CI monitoring system; GDTA goal hierarchy; critical infrastructure monitoring operator; goal-directed task analysis; hierarchy diagram; observation session; requirement base; situation awareness requirement; subject matter expert; user interface evaluation; Context; Industries; Interviews; Monitoring; Organizations; Security; User interfaces (ID#: 15-7387)


Esmaily, Jamal; Moradinezhad, Reza; Ghasemi, Jamal, “Intrusion Detection System Based on Multi-Layer Perceptron Neural Networks and Decision Tree,” in Information and Knowledge Technology (IKT), 2015 7th Conference on, vol., no., pp. 1–5, 26–28 May 2015. doi:10.1109/IKT.2015.7288736
Abstract: The growth of Internet attacks is a major problem for today’s computer networks. Hence, implementing security methods to prevent such attacks is crucial for any computer network. With the help of machine learning and data mining techniques, Intrusion Detection Systems (IDS) are able to diagnose attacks and system anomalies more effectively. However, most of the methods studied in this field, including rule-based expert systems, are not able to successfully identify attacks whose patterns differ from the expected ones. By using Artificial Neural Networks (ANNs), it is possible to identify attacks and classify the data even when the dataset is nonlinear, limited, or incomplete. In this paper, a method based on the combination of the Decision Tree (DT) algorithm and a Multi-Layer Perceptron (MLP) ANN is proposed which is able to identify attacks with high accuracy and reliability.
Keywords: Algorithm design and analysis; Classification algorithms; Clustering algorithms; Decision trees; Intrusion detection; Neural networks; Support vector machines; Decision Tree; Intrusion Detection Systems; Machine Learning; Neural Networks (ID#: 15-7388)


Desnitsky, V.A.; Kotenko, I.V.; Nogin, S.B., “Detection of Anomalies in Data for Monitoring of Security Components in the Internet of Things,” in Soft Computing and Measurements (SCM), 2015 XVIII International Conference on, vol., no., pp. 189–192, 19–21 May 2015. doi:10.1109/SCM.2015.7190452
Abstract: The growing urgency and expansion of information systems implementing the Internet of Things (IoT) concept make investigation of protection mechanisms against a wide range of information security threats important. Such investigation is complicated by the low degree of structure and formalization of expert knowledge on IoT systems. The paper presents an approach to eliciting expert knowledge on the detection of anomalies in data and to using that knowledge as input for automated means of monitoring the security components of the IoT.
Keywords: Internet of Things; information systems; monitoring; security of data;  IoT concept; data anomalies; information security threats; information systems; security components; Intelligent sensors; Monitoring; Security; Sensor systems; Software; Temperature sensors; IoT system testing; anomaly detection; expert knowledge; information security; internet of things (ID#: 15-7389)
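As a minimal stand-in for the expert-driven anomaly detectors the abstract describes, a z-score check over a sensor trace illustrates the idea; in practice the thresholds would come from elicited expert rules, and the trace and function names below are illustrative:

```python
import statistics

def zscore_anomalies(readings, threshold=3.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    mu = statistics.mean(readings)
    sigma = statistics.pstdev(readings)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(readings)
            if abs(x - mu) / sigma > threshold]

# Temperature-like sensor trace with one injected spike at index 5
trace = [20.1, 20.3, 19.9, 20.0, 20.2, 55.0, 20.1, 19.8]
anomalies = zscore_anomalies(trace, threshold=2.0)
```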


Baker, T.; Mackay, M.; Shaheed, A.; Aldawsari, B., “Security-Oriented Cloud Platform for SOA-Based SCADA,” in Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, vol., no., pp. 961–970, 4–7 May 2015. doi:10.1109/CCGrid.2015.37
Abstract: During the last 10 years, experts in critical infrastructure security have been increasingly directing their focus and attention to the security of control structures such as Supervisory Control and Data Acquisition (SCADA) systems in the light of the move toward Internet-connected architectures. However, this more open architecture has resulted in an increasing level of risk being faced by these systems, especially as they became offered as services and utilised via Service Oriented Architectures (SOA). For example, the SOA-based SCADA architecture proposed by the AESOP project concentrated on facilitating the integration of SCADA systems with distributed services on the application layer of a cloud network. However, whilst each service specified various security goals, such as authorisation and authentication, the current AESOP model does not attempt to encompass all the necessary security requirements and features of the integrated services. This paper presents a concept for an innovative integrated cloud platform to reinforce the integrity and security of SOA-based SCADA systems that will apply in the context of Critical Infrastructures to identify the core requirements, components and features of these types of system. The paper uses the SmartGrid to highlight the applicability and importance of the proposed platform in a real world scenario.
Keywords: SCADA systems; cloud computing; critical infrastructures; distributed processing; security of data; service-oriented architecture; SCADA; SOA; cloud network; critical infrastructure security; distributed service; security-oriented cloud platform; service oriented architecture; supervisory control and data acquisition; Authorization; Cloud computing; Computer architecture; Monitoring; SCADA Service-oriented architecture; Cloud Computing; Critical Infrastructure; (ID#: 15-7390)


Yamaguchi, F.; Maier, A.; Gascon, H.; Rieck, K., “Automatic Inference of Search Patterns for Taint-Style Vulnerabilities,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 797–812, 17–21 May 2015. doi:10.1109/SP.2015.54
Abstract: Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered “Heartbleed” vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficiently searching for unsanitized data flows, across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in different experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.
Keywords: application program interfaces; data flow analysis; public domain software; security of data; software engineering; C code; attacker-controlled data; automatic inference; code property graph; data flow; data security; inferred search pattern; memory function; open-source project; project-specific API; search pattern; security-sensitive sink; sensitive sink; software development; source-sink system; taint-style vulnerability; Databases; Libraries; Payloads; Programming; Security; Software; Syntactics; Clustering; Graph Databases; Vulnerabilities (ID#: 15-7391)
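The source-to-sink view of taint-style vulnerabilities can be illustrated with a toy data-flow graph search for paths that avoid sanitizers. This is a drastic simplification of the paper's code-property-graph traversals; all node names below are hypothetical:

```python
from collections import deque

# Toy data-flow graph: an edge A -> B means "data flows from A to B".
FLOWS = {
    "recv": ["parse_header", "parse_body"],
    "parse_header": ["sanitize_len"],
    "parse_body": ["memcpy_wrapper"],   # flows to the sink unsanitized
    "sanitize_len": ["memcpy_wrapper"],
}

def unsanitized_paths(source, sink, sanitizers):
    """Enumerate source->sink flow paths that avoid every sanitizer."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == sink:
            paths.append(path)
            continue
        for nxt in FLOWS.get(node, []):
            if nxt not in sanitizers and nxt not in path:
                queue.append(path + [nxt])
    return paths

hits = unsanitized_paths("recv", "memcpy_wrapper", {"sanitize_len"})
```

The path through `parse_header` is dropped because it passes the sanitizer; the direct path through `parse_body` is reported, mirroring the kind of finding the inferred search patterns surface.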


Younis, A.A.; Malaiya, Y.K., “Comparing and Evaluating CVSS Base Metrics and Microsoft Rating System,” in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, vol., no., pp. 252–261, 3–5 Aug. 2015. doi:10.1109/QRS.2015.44
Abstract: Evaluating the accuracy of vulnerability security risk metrics is important because incorrectly assessing a vulnerability to be more critical could lead to a waste of limited resources available and ignoring a vulnerability incorrectly assessed as not critical could lead to a breach with a high impact. In this paper, we compare and evaluate the performance of the CVSS Base metrics and Microsoft Rating system. The CVSS Base metrics are the de facto standard that is currently used to measure the severity of individual vulnerabilities. The Microsoft Rating system developed by Microsoft has been used for some of the most widely used systems. Microsoft software vulnerabilities have been assessed by both the Microsoft metrics and the CVSS Base metrics which makes their comparison feasible. The two approaches, the technical analysis approach (Microsoft) and the expert opinions approach (CVSS) differ significantly. To conduct this study, we examine 813 vulnerabilities of Internet Explorer and Windows 7. The two software systems have been selected because they have a rich history of publicly available vulnerabilities, and they differ significantly in functionality and size. The presence of actual exploits is used for evaluating them. The results show that exploitability metrics in either system do not correlate strongly with the existence of exploits, and have a high false positive rate.
Keywords: Internet; security of data; software metrics; CVSS base metrics; Internet Explorer; Microsoft rating system; Microsoft software vulnerabilities; Windows 7; expert opinions approach; exploitability metrics; publicly available vulnerabilities; technical analysis approach; vulnerability security risk metrics; Browsers; Indexes; Internet; Measurement; Security; Software; CVSS Base Metrics; Empirical Software Engineering; Exploits; Microsoft Exploitability Index; Microsoft Rating System; Risk assessment; Severity; Software Vulnerability (ID#: 15-7392)
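For reference, the CVSS v2 base score that the comparison rests on is a fixed formula over six base metrics. The constants below are from the CVSS v2 specification; the helper names are mine:

```python
# CVSS v2 base-score sketch. Metric values per the CVSS v2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Confidentiality/Integrity/Availability impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

score = cvss2_base("N", "L", "N", "C", "C", "C")  # worst-case vector
```

Note that the exploitability subscore (20 * AV * AC * AU) is exactly the quantity the paper finds correlates only weakly with observed exploits.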


Afzal, Z.; Lindskog, S., “Automated Testing of IDS Rules,” in Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, vol., no., pp. 1–2, 13–17 April 2015. doi:10.1109/ICSTW.2015.7107461
Abstract: As technology becomes ubiquitous, new vulnerabilities are being discovered at a rapid rate. Security experts continuously find ways to detect attempts to exploit those vulnerabilities. The outcome is an extremely large and complex rule set used by Intrusion Detection Systems (IDSs) to detect and prevent the vulnerabilities. The rule sets have become so large that it seems infeasible to verify their precision or identify overlapping rules. This work proposes a methodology consisting of a set of tools that will make rule management easier.
Keywords: program testing; security of data; IDS rules; automated testing; intrusion detection systems; Conferences; Generators; Intrusion detection; Payloads; Protocols; Servers; Testing (ID#: 15-7393)
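One check such a rule-management tool might perform, flagging IDS rules whose match conditions overlap, can be sketched over a simplified rule representation. The fields below are my own simplification, not Snort's full rule syntax:

```python
def overlaps(r1, r2):
    """True if two simplified rules can match the same traffic:
    same protocol and intersecting destination-port ranges."""
    return (r1["proto"] == r2["proto"]
            and r1["dport"][0] <= r2["dport"][1]
            and r2["dport"][0] <= r1["dport"][1])

rules = [
    {"sid": 1, "proto": "tcp", "dport": (80, 80)},
    {"sid": 2, "proto": "tcp", "dport": (1, 1024)},
    {"sid": 3, "proto": "udp", "dport": (53, 53)},
]
# Report every overlapping pair of rule IDs
pairs = [(a["sid"], b["sid"])
         for i, a in enumerate(rules) for b in rules[i + 1:]
         if overlaps(a, b)]
```

Real rules also constrain source addresses, flow state, and payload content, so a production checker would compare all of those dimensions.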


Enache, A.-C.; Ioniţă, M.; Sgârciu, V., “An Immune Intelligent Approach for Security Assurance,” in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, vol., no., pp. 1–5, 8–9 June 2015. doi:10.1109/CyberSA.2015.7166116
Abstract: Information security assurance implies ensuring the integrity, confidentiality, and availability of an organization’s critical assets. The large number of events to monitor, in a system that is fluid in its topology and in its variety of new hardware and software, overwhelms monitoring controls. Furthermore, the multi-faceted nature of today’s cyber threats makes them difficult even for security experts to handle and keep up with. Hence, automatic “intelligent” tools are needed to address these issues. In this paper, we describe a ‘work in progress’ contribution on an intelligence-based approach to mitigating security threats. The main contribution of this work is an anomaly-based IDS model with active response that combines artificial immune systems and swarm intelligence with the SVM classifier. Test results on the NSL-KDD dataset show the proposed approach can outperform the standard classifier in terms of attack detection rate and false alarm rate, while reducing the number of features in the dataset.
Keywords: artificial immune systems; pattern classification; security of data; support vector machines; NSL-KDD dataset; SVM classifier; anomaly based IDS model; artificial immune system; asset availability; asset confidentiality; asset integrity; attack detection rate; cyber threats; false alarm rate; immune intelligent approach; information security assurance; intrusion detection system; security threats mitigation; swarm intelligence; Feature extraction; Immune system; Intrusion detection; Particle swarm optimization; Silicon; Support vector machines; Binary Bat Algorithm; Dendritic Cell Algorithm; IDS; SVM (ID#: 15-7394)


Sundarkumar, G. Ganesh; Ravi, Vadlamani; Nwogu, Ifeoma; Govindaraju, Venu, “Malware Detection via API Calls, Topic Models and Machine Learning,” in Automation Science and Engineering (CASE), 2015 IEEE International Conference on, vol., no., pp. 1212–1217, 24–28 Aug. 2015. doi:10.1109/CoASE.2015.7294263
Abstract: Dissemination of malicious code, also known as malware, poses severe challenges to cyber security. Malware authors embed software in seemingly innocuous executables, unknown to a user. The malware subsequently interacts with security-critical OS resources on the host system or network, in order to destroy their information or to gather sensitive information such as passwords and credit card numbers. Malware authors typically use Application Programming Interface (API) calls to perpetrate these crimes. We present a model that uses text mining and topic modeling to detect malware, based on the types of API call sequences. We evaluated our technique on two publicly available datasets. We observed that Decision Tree and Support Vector Machine yielded significant results. We performed t-test with respect to sensitivity for the two models and found that statistically there is no significant difference between these models. We recommend Decision Tree as it yields ‘if-then’ rules, which could be used as an early warning expert system.
Keywords: Feature extraction; Grippers; Sensitivity; Support vector machines; Text mining; Trojan horses (ID#: 15-7395)
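The text-mining step the abstract describes, treating API call sequences as documents, can be sketched as n-gram feature extraction. The call sequences and the sample 'if-then' rule below are illustrative, not drawn from the paper's datasets:

```python
from collections import Counter

def api_ngrams(calls, n=2):
    """Bag of API-call n-grams, the kind of feature a text-mining
    pipeline would feed into topic models and classifiers."""
    return Counter(tuple(calls[i:i + n]) for i in range(len(calls) - n + 1))

# A classic process-injection call sequence vs. a benign file access.
suspicious = ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory",
              "CreateRemoteThread"]
benign = ["OpenFile", "ReadFile", "CloseFile"]

feats = api_ngrams(suspicious)
# An 'if-then' rule of the sort a decision tree might yield:
flagged = ("WriteProcessMemory", "CreateRemoteThread") in feats
```

Rules like `flagged` are why the authors recommend the decision tree: its output is directly readable as early-warning expert-system rules.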


Szpyrka, M.; Szczur, A.; Bazan, J.G.; Dydo, L., “Extracting of Temporal Patterns from Data for Hierarchical Classifiers Construction,” in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, vol., no., pp. 330–335, 24–26 June 2015. doi:10.1109/CYBConf.2015.7175955
Abstract: A method for automatically extracting temporal patterns from learning data in order to construct hierarchical classifiers based on behavioral patterns is considered in the paper. The presented approach can be used to complement the knowledge provided by experts, or to discover the knowledge automatically if no expert knowledge is available. A formal description of temporal patterns is provided, and an algorithm for automatic pattern extraction and evaluation is described. A system for packet-based network traffic anomaly detection is used to illustrate the ideas.
Keywords: computer network security; data mining; learning (artificial intelligence); pattern classification; temporal logic; automatic pattern extraction; data temporal pattern extraction; hierarchical behavioral pattern; hierarchical classifier construction; knowledge discovery; learning data; packet-based network traffic anomaly detection; Clustering algorithms; Data mining; Decision trees; Entropy; Petri nets; Ports (Computers); Servers; LTL logic; feature extraction; hierarchical classifiers; network anomaly detection; temporal patterns (ID#: 15-7396)


Kobashi, T.; Yoshizawa, M.; Washizaki, H.; Fukazawa, Y.; Yoshioka, N.; Okubo, T.; Kaiya, H., “TESEM: A Tool for Verifying Security Design Pattern Applications by Model Testing,” in Software Testing, Verification and Validation (ICST), 2015 IEEE 8th International Conference on, vol., no., pp. 1–8, 13–17 April 2015. doi:10.1109/ICST.2015.7102633
Abstract: Because software developers are not necessarily security experts, identifying potential threats and vulnerabilities in the early stage of the development process (e.g., the requirement- or design-phase) is insufficient. Even if these issues are addressed at an early stage, it does not guarantee that the final software product actually satisfies security requirements. To realize secure designs, we propose extended security patterns, which include requirement- and design-level patterns as well as a new model testing process. Our approach is implemented in a tool called TESEM (Test Driven Secure Modeling Tool), which supports pattern applications by creating a script to execute model testing automatically. During an early development stage, the developer specifies threats and vulnerabilities in the target system, and then TESEM verifies whether the security patterns are properly applied and assesses whether these vulnerabilities are resolved.
Keywords: formal specification; program testing; program verification; security of data; software tools; TESEM; design-level pattern; development process; development stage; model testing; requirement-level pattern; security design pattern application verification; software product; test driven secure modeling tool; threat specification; vulnerability specification; Generators; Security; Software; Systematics; Testing; Unified modeling language (ID#: 15-7397)


Olabelurin, A.; Veluru, S.; Healing, A.; Rajarajan, M., “Entropy Clustering Approach for Improving Forecasting in DDoS Attacks,” in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on, vol., no., pp. 315–320, 9–11 April 2015. doi:10.1109/ICNSC.2015.7116055
Abstract: Volume anomalies such as distributed denial-of-service (DDoS) attacks have been around for ages, but with advances in technology they have become stronger, shorter, and a weapon of choice for attackers. Digital forensic analysis of intrusions using alerts generated by existing intrusion detection systems (IDS) faces major challenges, especially for IDS deployed in large networks. In this paper, the concept of automatically sifting through a huge volume of alerts to distinguish the different stages of a DDoS attack is developed. The proposed novel framework is purpose-built to analyze multiple logs from the network for proactive forecasting and timely detection of DDoS attacks, through a combined approach of the Shannon entropy concept and a clustering algorithm over relevant feature variables. Experimental studies on a cyber-range simulation dataset from the project’s industrial partners show that the technique is able to distinguish precursor alerts for DDoS attacks, as well as the attack itself, with a very low false positive rate (FPR) of 22.5%. Application of this technique greatly assists security experts in network analysis to combat DDoS attacks.
Keywords: computer network security; digital forensics; entropy; forecasting theory; pattern clustering; DDoS attacks; FPR; IDS; Shannon-entropy concept; clustering algorithm; cyber-range simulation dataset; digital forensic analysis; distributed denial-of-service; entropy clustering approach; false positive rate; forecasting; intrusion detection system; network analysis; proactive forecast; project industrial partner; volume anomaly; Algorithm design and analysis; Clustering algorithms; Computer crime; Entropy; Feature extraction; Ports (Computers); Shannon entropy; alert management; distributed denial-of-service (DDoS) detection; k-means clustering analysis; network security; online anomaly detection (ID#: 15-7398)
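The Shannon-entropy component of such a framework can be sketched directly: compute the entropy of a feature distribution (for example, the source IPs seen in one time window) and watch for deviations from the baseline. The traces below are toy data; a single-source flood drives entropy down, while a widely spoofed flood would drive it up, and either deviation can be flagged:

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution of items."""
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Normal window: traffic spread over many sources -> higher entropy.
normal = ["10.0.0.%d" % i for i in range(8)]
# Attack-like window: one source hammering the network -> low entropy.
attack = ["10.0.0.1"] * 8

h_normal = shannon_entropy(normal)
h_attack = shannon_entropy(attack)
```

The framework then clusters the per-window entropy values of several features to separate precursor stages from the attack itself.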


Solic, K.; Velki, T.; Galba, T., “Empirical Study on ICT System’s Users’ Risky Behavior and Security Awareness,” in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, vol., no., pp. 1356–1359, 25–29 May 2015. doi:10.1109/MIPRO.2015.7160485
Abstract: In this study the authors gathered information on ICT users from different areas of Croatia, with different knowledge, experience, workplace, age, and gender backgrounds, in order to examine the current situation in the Republic of Croatia (n=701) regarding ICT users’ potentially risky behavior and security awareness. To gather the desired data, the validated Users’ Information Security Awareness Questionnaire (UISAQ) was used. The outcomes represent results for ICT users in Croatia across six subareas (mean of items): Usual risky behavior (x1=4.52), Personal computer maintenance (x2=3.18), Borrowing access data (x3=4.74), Criticism on security in communications (x4=3.48), Fear of losing data (x5=2.06), and Rating importance of backup (x6=4.18). A comparison between users across demographic variables (age, gender, professional qualification, occupation, managerial position, and institution category) is given. Perhaps the most interesting finding is the percentage of surveyed users who revealed the password for their professional e-mail system (28.8%). This should alert security experts and security managers in enterprises and government institutions, and also in schools and faculties. The results of this study should be used to develop solutions and induce actions aimed at increasing awareness among Internet users of information security and privacy issues.
Keywords: Internet; data privacy; electronic mail; risk analysis; security of data; ICT system; Internet users; Republic of Croatia; UISAQ; age; enterprises; experience; gender background; government institutions; institution category; job position; knowledge; occupation; personal computer maintenance; privacy issues; professional e-mail system; professional qualification; security awareness; security experts; security managers; user information security awareness questionnaire; user risky behavior; working place; Electronic mail; Government; Information security; Microcomputers; Phase change materials; Qualifications (ID#: 15-7399)


Bermudez, I.; Tongaonkar, A.; Iliofotou, M.; Mellia, M.; Munafò, M.M., “Automatic Protocol Field Inference for Deeper Protocol Understanding,” in IFIP Networking Conference (IFIP Networking), 2015, pp. 1–9, 20–22 May 2015. doi:10.1109/IFIPNetworking.2015.7145307
Abstract: Security tools have evolved dramatically in recent years to combat the increasingly complex nature of attacks, but to be effective these tools need to be configured by experts who understand network protocols thoroughly. In this paper we present FieldHunter, which automatically extracts protocol fields and infers their types, providing this much-needed information to security experts so they can keep pace with the increasing rate of new network applications and their underlying protocols. FieldHunter collects application messages from multiple sessions and then applies statistical correlations to infer the types of the fields. These correlations can be between different messages, or with meta-data such as message length or client and server IPs. The system is designed to extract and infer fields from both binary and textual protocols. We evaluated FieldHunter on real network traffic collected in ISP networks on three different continents. FieldHunter was able to extract security-relevant fields and infer their nature for well-documented network protocols (such as DNS and MSNP), for protocols whose specifications are not publicly available (such as SopCast), and for malware such as Ramnit.
Keywords: Internet; invasive software; meta data; statistical analysis; telecommunication traffic; transport protocols; DNS; FieldHunter; ISP network; Internet protocol; Internet service provider; MSNP; Microsoft notification protocol; Ramnit; SopCast; automatic protocol field inference; binary protocol; client IP; domain name system; field extraction; malware; message length; metadata; network protocol; network traffic; protocol understanding; security tool; server IP; statistical correlation; textual protocol; Correlation; Entropy; IP networks; Protocols; Radiation detectors; Security; Servers (ID#: 15-7400)
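One of the statistical correlations the abstract mentions is between a candidate field’s value and the total message length. A toy sketch of that idea (not FieldHunter’s actual implementation; the messages are invented):

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation between two equal-length numeric sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical binary messages: first byte encodes the payload length.
messages = [bytes([len(p)]) + p for p in (b"hi", b"hello", b"hey!", b"hello!!")]

candidate_values = [m[0] for m in messages]   # value of the candidate field
total_lengths = [len(m) for m in messages]    # observed message lengths

# A correlation near 1.0 across many sessions suggests a length field.
r = pearson(candidate_values, total_lengths)
print(f"correlation with message length: {r:.3f}")
```

Here the candidate byte tracks the message length perfectly, so the correlation is 1.0; an unrelated field (e.g., a random session ID) would score near zero.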


Oprea, A.; Zhou Li; Ting-Fang Yen; Chin, S.H.; Alrwais, S., “Detection of Early-Stage Enterprise Infection by Mining Large-Scale Log Data,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 45–56, 22–25 June 2015. doi:10.1109/DSN.2015.14
Abstract: Recent years have seen the rise of sophisticated attacks, including advanced persistent threats (APTs), which pose severe risks to organizations and governments. Additionally, new malware strains appear at a higher rate than ever before. Since many of these strains evade existing security products, the traditional defenses deployed by enterprises today often fail to detect infections at an early stage. We address the problem of detecting early-stage APT infection by proposing a new framework based on belief propagation, inspired by graph theory. We demonstrate that our techniques perform well on two large datasets. We achieve high accuracy on two months of DNS logs released by Los Alamos National Lab (LANL), which include APT infection attacks simulated by LANL domain experts. We also apply our algorithms to 38 TB of web proxy logs collected at the border of a large enterprise and identify hundreds of malicious domains overlooked by state-of-the-art security products.
Keywords: Internet; belief networks; business data processing; data mining; graph theory; invasive software; APT infection attacks; DNS logs; LANL; Los Alamos National Lab; Web proxy logs; advanced persistent threats; belief propagation; early-stage APT infection; early-stage enterprise infection detection; large-scale log data mining; malware strains; security products; Belief propagation; Electronic mail; IP networks; Malware; Servers; System-on-chip; Advanced Persistent Threats; Belief Propagation; Data Analysis (ID#: 15-7401)
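The belief-propagation idea behind this line of work can be illustrated with a toy score-propagation loop on a bipartite host–domain graph built from logs: suspicion flows from a known-bad seed domain to the hosts that contacted it, and on to the other domains those hosts visited. This is a simplified sketch with invented edges, not the paper’s algorithm:

```python
# Edges: which internal host contacted which external domain (invented).
edges = [
    ("host1", "evil.example"), ("host1", "cdn.example"),
    ("host2", "evil.example"), ("host2", "rare.example"),
    ("host3", "cdn.example"),
]

# Prior beliefs: a known-bad seed domain starts with maximal malice score.
domain_score = {"evil.example": 1.0, "cdn.example": 0.0, "rare.example": 0.0}
host_score = {h: 0.0 for h, _ in edges}

DAMPING = 0.5
for _ in range(10):  # iterate until scores stabilize
    # Hosts inherit the average score of the domains they contacted...
    for h in host_score:
        ds = [domain_score[d] for hh, d in edges if hh == h]
        host_score[h] = sum(ds) / len(ds)
    # ...and non-seed domains inherit a damped average of their hosts' scores.
    for d in domain_score:
        if d == "evil.example":
            continue  # keep the seed's prior fixed
        hs = [host_score[h] for h, dd in edges if dd == d]
        domain_score[d] = DAMPING * sum(hs) / len(hs)

ranked = sorted(domain_score, key=domain_score.get, reverse=True)
print(ranked)
```

The rarely visited domain contacted only by an already-suspicious host ends up ranked above the benign shared domain, which is the guilt-by-association effect that makes such graph methods useful for surfacing overlooked malicious domains.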


Antunes, N.; Vieira, M., “On the Metrics for Benchmarking Vulnerability Detection Tools,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 505–516, 22–25 June 2015. doi:10.1109/DSN.2015.30
Abstract: Research and practice show that the effectiveness of vulnerability detection tools depends on the concrete use scenario. Benchmarking can be used to select the most appropriate tool, helping to assess and compare alternative solutions, but its effectiveness largely depends on the adequacy of the metrics. This paper studies the problem of selecting the metrics to be used in a benchmark for software vulnerability detection tools. First, a large set of metrics is gathered and analyzed according to the characteristics of a good metric for the vulnerability detection domain. Afterwards, the metrics are analyzed in the context of specific vulnerability detection scenarios to understand their effectiveness and to select the most adequate one for each scenario. Finally, an MCDA algorithm, together with experts’ judgment, is applied to validate the conclusions. The results show that although some traditionally used metrics, such as precision and recall, are adequate in some scenarios, other scenarios require alternative metrics that are seldom used in benchmarking.
Keywords: invasive software; software metrics; MCDA algorithm; alternative metrics; benchmarking vulnerability detection tool; software vulnerability detection tool; Benchmark testing; Concrete; Context; Measurement; Security; Standards; Automated Tools; Benchmarking; Security Metrics; Software Vulnerabilities; Vulnerability Detection (ID#: 15-7402)
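For readers unfamiliar with the traditional metrics the abstract mentions, precision and recall (and their harmonic mean, the F-measure) are computed from true positives, false positives, and false negatives. A minimal sketch with an invented benchmark outcome:

```python
def precision(tp, fp):
    # Fraction of reported vulnerabilities that are real.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of real vulnerabilities that were reported.
    return tp / (tp + fn)

def f_measure(p, r):
    # Harmonic mean of precision and recall (F1).
    return 2 * p * r / (p + r)

# Hypothetical result for one vulnerability detection tool: 40 true
# vulnerabilities flagged, 10 false alarms, 10 vulnerabilities missed.
tp, fp, fn = 40, 10, 10
p, r = precision(tp, fp), recall(tp, fn)
print(f"precision={p:.2f} recall={r:.2f} F1={f_measure(p, r):.2f}")
```

As the paper argues, a single pair of numbers like this can still be inadequate: a scenario where false alarms are costly weights precision heavily, while an audit scenario may care almost exclusively about recall.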


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.