Biblio

Found 197 results

Filters: Keyword is Tools
2020-04-03
Aires Urquiza, Abraão, AlTurki, Musab A., Kanovich, Max, Ban Kirigin, Tajana, Nigam, Vivek, Scedrov, Andre, Talcott, Carolyn.  2019.  Resource-Bounded Intruders in Denial of Service Attacks. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :382—38214.
Denial of Service (DoS) attacks have been a serious security concern, as no service is, in principle, protected against them. Although a Dolev-Yao intruder with unlimited resources can trivially render any service unavailable, DoS attacks do not necessarily have to be carried out by such (extremely) powerful intruders. It is useful in practice and more challenging for formal protocol verification to determine whether a service is vulnerable even to resource-bounded intruders that cannot generate or intercept arbitrarily large volumes of traffic. This paper proposes a novel, more refined intruder model where the intruder can consume at most some specified amount of resources in any given time window. Additionally, we propose protocol theories that may contain timeouts and specify service resource usage during protocol execution. In contrast to existing resource-conscious protocol verification models, our model allows finer and more subtle analysis of DoS problems. We illustrate the power of our approach by representing a number of classes of DoS attacks, such as Slow, Asymmetric, and Amplification DoS attacks, which exhaust different types of resources of the target, such as the number of workers, processing power, memory, and network bandwidth. We show that the proposed DoS problem is undecidable in general and is PSPACE-complete for the class of resource-bounded, balanced systems. Finally, we implemented our formal verification model in the rewriting logic tool Maude and analyzed a number of DoS attacks in Maude using Rewriting Modulo SMT in an automated fashion.
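As a rough illustration of the resource-bounded intruder idea (and not the paper's Maude/Rewriting Modulo SMT model), the following Python sketch simulates a "slow" intruder that may send at most a fixed number of requests per time window against a service with a limited worker pool. The numbers and names are hypothetical; the point is that even a modest per-window budget can keep all workers occupied when each malicious request holds a worker for a long timeout.

```python
# Toy discrete-time simulation of a resource-bounded slow-DoS intruder (illustrative only).

WORKERS = 10            # concurrent worker slots on the target service
TIMEOUT = 30            # ticks a slow request keeps a worker busy
BUDGET, WINDOW = 5, 10  # intruder may send at most 5 requests per 10 ticks

def simulate(ticks=200):
    busy_until = []                # release times of occupied workers
    denied_ticks = 0
    sent_in_window = 0
    for t in range(ticks):
        if t % WINDOW == 0:
            sent_in_window = 0                              # budget refreshes each window
        busy_until = [r for r in busy_until if r > t]       # free finished workers
        while sent_in_window < BUDGET and len(busy_until) < WORKERS:
            busy_until.append(t + TIMEOUT)                  # intruder opens a slow request
            sent_in_window += 1
        if len(busy_until) >= WORKERS:
            denied_ticks += 1                               # a legitimate client would be refused
    return denied_ticks / ticks

if __name__ == "__main__":
    print(f"fraction of time the service is unavailable: {simulate():.2f}")
```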
Künnemann, Robert, Esiyok, Ilkan, Backes, Michael.  2019.  Automated Verification of Accountability in Security Protocols. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :397—39716.
Accountability is a recent paradigm in security protocol design which aims to eliminate traditional trust assumptions on parties and hold them accountable for their misbehavior. It is meant to establish trust in the first place and to recognize and react if this trust is violated. In this work, we discuss a protocol-agnostic definition of accountability: a protocol provides accountability (w.r.t. some security property) if it can identify all misbehaving parties, where misbehavior is defined as a deviation from the protocol that causes a security violation. We provide a mechanized method for the verification of accountability and demonstrate its use for verification and attack finding on various examples from the accountability and causality literature, including Certificate Transparency and Kroll's Accountable Algorithms protocol. We reach a high degree of automation by expressing accountability in terms of a set of trace properties and show their soundness and completeness.
2020-03-27
Walker, Aaron, Amjad, Muhammad Faisal, Sengupta, Shamik.  2019.  Cuckoo’s Malware Threat Scoring and Classification: Friend or Foe? 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC). :0678–0684.
Malware threat classification involves understanding the behavior of the malicious software and how it affects a victim host system. Classifying threats allows for a measured response appropriate to the risk involved. Malware incident response depends on many automated tools for threat classification to help identify the appropriate reaction to a threat alert. Cuckoo Sandbox is one such tool which can be used for automated analysis of malware, and one method of threat classification it provides is a threat score. A security analyst might submit a suspicious file to Cuckoo for analysis to determine whether or not the file contains malware or performs potentially malicious behavior on a system. Cuckoo is capable of producing a report of this behavior and ranks the severity of the observed actions as a score from one to ten, with ten being the most severe. As such, a malware sample classified as an 8 would likely take priority over a sample classified as a 3. Unfortunately, this scoring classification can be misleading due to the underlying methodology of severity classification. In this paper we demonstrate why the current methodology of threat scoring is flawed and why we believe it can be improved with greater emphasis on analyzing the behavior of the malware. This allows for a threat classification rating which scales with the risk involved in the malware behavior.
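To make the scoring concern concrete, here is an illustrative Python sketch, not Cuckoo's actual scoring code: two ways of turning matched behavioral signatures into a threat score. The signature names and severities are invented. An additive count can rank many low-severity matches above a single critical behavior, which is the kind of misleading result the paper discusses; weighting by the worst observed behavior avoids that inversion.

```python
# Hypothetical signature matches: (signature name, assumed severity 1-5).
benign_ish = [("checks_locale", 1), ("reads_clipboard", 1), ("enumerates_windows", 1),
              ("queries_uptime", 1), ("creates_mutex", 1), ("sleeps_long", 1)]
ransomware = [("encrypts_user_files", 5)]

def additive_score(sigs):
    return min(10, sum(sev for _, sev in sigs))        # naive: every match adds up

def severity_weighted_score(sigs):
    return min(10, 2 * max(sev for _, sev in sigs))    # dominated by the worst behavior

for name, sigs in [("noisy-but-benign", benign_ish), ("ransomware-like", ransomware)]:
    print(f"{name}: additive={additive_score(sigs)}, "
          f"severity-weighted={severity_weighted_score(sigs)}")
```

With these made-up inputs the additive score ranks the noisy benign sample (6) above the ransomware-like one (5), while the severity-weighted score ranks them 2 versus 10.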
Jadidi, Mahya Soleimani, Zaborski, Mariusz, Kidney, Brian, Anderson, Jonathan.  2019.  CapExec: Towards Transparently-Sandboxed Services. 2019 15th International Conference on Network and Service Management (CNSM). :1–5.
Network services are among the riskiest programs executed by production systems. Such services execute large quantities of complex code and process data from arbitrary — and untrusted — network sources, often with high levels of system privilege. It is desirable to confine system services to a least-privileged environment so that the potential damage from a malicious attacker can be limited, but existing mechanisms for sandboxing services require invasive and system-specific code changes and are insufficient to confine broad classes of network services. Rather than sandboxing one service at a time, we propose that the best place to add sandboxing to network services is in the service manager that starts those services. As a first step towards this vision, we propose CapExec, a process supervisor that can execute a single service within a sandbox based on a service declaration file specifying the required resources, limited access to which is supported by Casper services. Using the Capsicum compartmentalization framework and its Casper service framework, CapExec provides robust application sandboxing without requiring any modifications to the application itself. We believe that this is the first step towards ubiquitous sandboxing of network services without the costs of virtualization.
Huang, Shiyou, Guo, Jianmei, Li, Sanhong, Li, Xiang, Qi, Yumin, Chow, Kingsum, Huang, Jeff.  2019.  SafeCheck: Safety Enhancement of Java Unsafe API. 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). :889–899.
Java is a safe programming language, providing bytecode verification and enforcing memory protection. For instance, programmers cannot directly access memory but have to use object references. Yet, the Java runtime provides an Unsafe API as a backdoor for developers to access low-level system code. Whereas the Unsafe API is designed to be used by the Java core library, a growing community of third-party libraries use it to achieve high performance. The Unsafe API is powerful but dangerous, and its misuse can lead to data corruption, resource leaks, and difficult-to-diagnose JVM crashes. In this work, we study Unsafe crash patterns and propose a memory checker to enforce memory safety at the bytecode level, thus avoiding JVM crashes caused by misuse of the Unsafe API. We evaluate our technique on real crash cases from the openJDK bug system and real-world applications from AJDK. Our tool reduces the effort required for developers to diagnose Unsafe-related crashes from several days to a few minutes. We also evaluate the runtime overhead of our tool on projects using intensive Unsafe operations, and the results show that our tool causes a negligible perturbation to the execution of the applications.
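The general idea of checking raw memory accesses against recorded allocations can be sketched in a few lines. The Python sketch below is purely conceptual (SafeCheck itself instruments Java bytecode around the Unsafe API); the class and method names are illustrative, not part of any real API.

```python
# Conceptual shadow-memory checker: record off-heap allocations and validate every
# raw access against a live (base, size) region before it happens.

class ShadowMemoryChecker:
    def __init__(self):
        self._regions = {}           # base address -> size
        self._next = 0x1000

    def allocate(self, size):
        base = self._next
        self._next += size
        self._regions[base] = size   # record the allocation (cf. allocateMemory)
        return base

    def free(self, base):
        self._regions.pop(base)      # accesses after this point become invalid

    def check_access(self, addr, width):
        for base, size in self._regions.items():
            if base <= addr and addr + width <= base + size:
                return True
        raise MemoryError(f"unsafe access at {addr:#x} ({width} bytes) outside any live region")

checker = ShadowMemoryChecker()
p = checker.allocate(16)
checker.check_access(p + 8, 8)       # ok: inside the 16-byte region
try:
    checker.check_access(p + 12, 8)  # out of bounds: would crash or corrupt memory in real Unsafe use
except MemoryError as e:
    print(e)
```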
Coblenz, Michael, Sunshine, Joshua, Aldrich, Jonathan, Myers, Brad A..  2019.  Smarter Smart Contract Development Tools. 2019 IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB). :48–51.
Much recent work focuses on finding bugs and security vulnerabilities in smart contracts written in existing languages. Although this approach may be helpful, it does not address flaws in the underlying programming language, which can facilitate writing buggy code in the first place. We advocate a re-thinking of the blockchain software engineering tool set, starting with the programming language in which smart contracts are written. In this paper, we propose and justify requirements for a new generation of blockchain software development tools. New tools should (1) consider users' needs as a primary concern; (2) seek to facilitate safe development by detecting relevant classes of serious bugs at compile time; (3) as much as possible, be blockchain-agnostic, given the wide variety of different blockchain platforms available, and leverage the properties that are common among blockchain environments to improve safety and developer effectiveness.
2020-03-23
Rustgi, Pulkit, Fung, Carol.  2019.  Demo: DroidNet - An Android Permission Control Recommendation System Based on Crowdsourcing. 2019 IFIP/IEEE Symposium on Integrated Network and Service Management (IM). :737–738.
Mobile and web application security, particularly in the area of data privacy, has raised much concern from the public in recent years. Most applications, or apps for short, are installed without disclosing full information to users or clearly stating what the application has access to, which often raises concern when users become aware of unnecessary information being collected. Unfortunately, most users have little to no technical expertise regarding which permissions should be turned on and can only rely on their intuition and past experiences to make relatively uninformed decisions. To solve this problem, we developed DroidNet, a crowd-sourced Android recommendation tool and framework. DroidNet alleviates privacy concerns and presents users with high-confidence permission control recommendations based on the decisions of expert users who are using the same apps. This paper explains the general framework, principles, and model behind DroidNet while also providing an experimental setup design which shows the effectiveness of and necessity for such a tool.
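A minimal sketch of the crowd-sourcing idea follows; it is not DroidNet's actual ranking model. Expert decisions for the same (app, permission) pair are aggregated, and a recommendation is only made once enough experts agree. The app name, thresholds, and vote data are hypothetical.

```python
from collections import defaultdict

votes = defaultdict(list)   # (app, permission) -> list of expert decisions (True = allow)

def record(app, permission, allow):
    votes[(app, permission)].append(allow)

def recommend(app, permission, min_experts=10, confidence=0.8):
    decisions = votes[(app, permission)]
    if len(decisions) < min_experts:
        return "no recommendation (too few expert responses)"
    allow_ratio = sum(decisions) / len(decisions)
    if allow_ratio >= confidence:
        return f"allow ({allow_ratio:.0%} of {len(decisions)} experts allow)"
    if allow_ratio <= 1 - confidence:
        return f"deny ({1 - allow_ratio:.0%} of {len(decisions)} experts deny)"
    return "no clear consensus"

# A flashlight app requesting contacts: most experts deny, so the tool recommends deny.
for allow in [False] * 11 + [True] * 2:
    record("flashlight.app", "READ_CONTACTS", allow)
print(recommend("flashlight.app", "READ_CONTACTS"))
```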
2020-03-18
Offenberger, Spencer, Herman, Geoffrey L., Peterson, Peter, Sherman, Alan T., Golaszewski, Enis, Scheponik, Travis, Oliva, Linda.  2019.  Initial Validation of the Cybersecurity Concept Inventory: Pilot Testing and Expert Review. 2019 IEEE Frontiers in Education Conference (FIE). :1–9.
We analyze expert review and student performance data to evaluate the validity of the Cybersecurity Concept Inventory (CCI) for assessing student knowledge of core cybersecurity concepts after a first course on the topic. A panel of 12 experts in cybersecurity reviewed the CCI, and 142 students from six different institutions took the CCI as a pilot test. The panel reviewed each item of the CCI and the overwhelming majority rated every item as measuring appropriate cybersecurity knowledge. We administered the CCI to students taking a first cybersecurity course either online or proctored by the course instructor. We applied classical test theory to evaluate the quality of the CCI. This evaluation showed that the CCI is sufficiently reliable for measuring student knowledge of cybersecurity and that the CCI may be too difficult as a whole. We describe the results of the expert review and the pilot test and provide recommendations for the continued improvement of the CCI.
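For readers unfamiliar with classical test theory, the quantities involved (per-item difficulty and a reliability coefficient such as KR-20) can be computed directly from a 0/1 response matrix. The sketch below uses a tiny synthetic matrix; the actual CCI analysis used the real pilot responses.

```python
# Classical test theory on a synthetic response matrix (rows = students, columns = items).

def item_difficulty(responses, j):
    return sum(r[j] for r in responses) / len(responses)          # proportion correct

def kr20(responses):
    k = len(responses[0])
    totals = [sum(r) for r in responses]
    mean = sum(totals) / len(totals)
    var = sum((t - mean) ** 2 for t in totals) / len(totals)      # population variance of total scores
    pq = sum(p * (1 - p) for p in (item_difficulty(responses, j) for j in range(k)))
    return (k / (k - 1)) * (1 - pq / var) if var else float("nan")

responses = [
    [1, 1, 1, 0, 1], [1, 0, 1, 0, 1], [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1], [0, 0, 0, 0, 1], [1, 1, 0, 0, 1],
]
print("item difficulties:", [round(item_difficulty(responses, j), 2) for j in range(5)])
print("KR-20 reliability:", round(kr20(responses), 2))
```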
Zhang, Ruipeng, Xu, Chen, Xie, Mengjun.  2019.  Powering Hands-on Cybersecurity Practices with Cloud Computing. 2019 IEEE 27th International Conference on Network Protocols (ICNP). :1–2.
Cybersecurity education and training have gained increasing attention in all sectors due to the prevalence and quick evolution of cyberattacks. A variety of platforms and systems have been proposed and developed to accommodate the growing needs of hands-on cybersecurity practice. However, those systems either lack sufficient flexibility (e.g., they are tied to a specific virtual computing service provider or offer little customization support) or are difficult to scale. In this work, we present a cloud-based platform named EZSetup for hands-on cybersecurity practice at scale and our experience of using it in class. EZSetup is customizable and cloud-agnostic. Users can create labs through an intuitive Web interface and deploy them onto one or multiple clouds. We have used the NSF-funded Chameleon cloud and our private OpenStack cloud to develop, test and deploy EZSetup. We have developed 14 network and security labs using the tool and included six labs in an undergraduate network security course in spring 2019. Our survey results show that students gave very positive feedback on using EZSetup and computing clouds for hands-on cybersecurity practice.
2020-03-16
Singh, Rina, Graves, Jeffrey A., Anantharaj, Valentine, Sukumar, Sreenivas R..  2019.  Evaluating Scientific Workflow Engines for Data and Compute Intensive Discoveries. 2019 IEEE International Conference on Big Data (Big Data). :4553–4560.
Workflow engines used to script scientific experiments involving numerical simulation, data analysis, instruments, edge sensors, and artificial intelligence have to deal with the complexities of hardware, software, resource availability, and the collaborative nature of science. In this paper, we survey workflow engines used in data-intensive and compute-intensive discovery pipelines from scientific disciplines such as astronomy, high energy physics, earth system science, bio-medicine, and material science and present a qualitative analysis of their respective capabilities. We compare 5 popular workflow engines and their differentiated approach to job orchestration, job launching, data management and provenance, security authentication, ease-of-use, workflow description, and scripting semantics. The comparisons presented in this paper allow practitioners to choose the appropriate engine for their scientific experiment and lead to recommendations for future work.
Tahat, Amer, Joshi, Sarang, Goswami, Pronnoy, Ravindran, Binoy.  2019.  Scalable Translation Validation of Unverified Legacy OS Code. 2019 Formal Methods in Computer Aided Design (FMCAD). :1–9.
Formally verifying functional and security properties of a large-scale production operating system is highly desirable. However, it is challenging as such OSes are often written in multiple source languages that have no formal semantics - a prerequisite for formal reasoning. To avoid expensive formalization of the semantics of multiple high-level source languages, we present a lightweight and rigorous verification toolchain that verifies OS code at the binary level, targeting ARM machines. To reason about ARM instructions, we first translate the ARM Specification Language that describes the semantics of the ARMv8 ISA into the PVS7 theorem prover and verify the translation. We leverage the radare2 reverse engineering tool to decode ARM binaries into PVS7 and verify the translation. Our translation verification methodology is a lightweight formal validation technique that generates large-scale instruction emulation test lemmas whose proof obligations are automatically discharged. To demonstrate our verification methodology, we apply the technique on two OSes: Google's Zircon and a subset of Linux. We extract a set of 370 functions from these OSes, translate them into PVS7, and verify the correctness of the translation by automatically discharging hundreds of thousands of proof obligations and tests. This took 27.5 person-months to develop.
2020-03-12
Dogruluk, Ertugrul, Costa, Antonio, Macedo, Joaquim.  2019.  A Detection and Defense Approach for Content Privacy in Named Data Network. 2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–5.

The Named Data Network (NDN) is a promising network paradigm for content distribution based on caching. However, it may put consumer privacy at risk, as the adversary may identify the content, the name and the signature (namely a certificate) through side-channel timing responses from the cache of the routers. The adversary may identify the content name and the consumer node by distinguishing between cached and uncached contents. In order to mitigate the timing attack, effective countermeasure methods have been proposed by other authors, such as random caching, random freshness, and probabilistic caching. In this work, we have implemented a timing attack scenario to evaluate the efficiency of these countermeasures and to demonstrate how the adversary can be detected. For this goal, a brute force timing attack scenario based on a real topology was developed, which is the first brute force attack model applied in NDN. Results show that the adversary nodes can be effectively distinguished from other legitimate consumers during the attack period. A multi-level mechanism to detect an adversary node is also proposed. Through this approach, the impact of the attack on content distribution performance can be mitigated.
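The timing side channel itself is simple to illustrate. The Python sketch below, which is a simplification and not the paper's experimental setup, classifies a name as cached or uncached from its response time, and applies a naive detector that flags consumers whose interests are dominated by one-shot probes. The latencies, thresholds, and names are invented.

```python
import random

CACHE_RTT_MS, ORIGIN_RTT_MS, THRESHOLD_MS = 5, 60, 20

def measured_rtt(name, cache):
    base = CACHE_RTT_MS if name in cache else ORIGIN_RTT_MS
    return base + random.uniform(0, 5)              # small jitter

def probe(name, cache):
    return "cached" if measured_rtt(name, cache) < THRESHOLD_MS else "uncached"

def looks_like_prober(interest_names, repeat_threshold=0.2):
    # Legitimate consumers tend to re-request related content; a timing prober mostly
    # sends distinct, never-repeated names. Flag consumers with too few repeats.
    repeats = len(interest_names) - len(set(interest_names))
    return repeats / len(interest_names) < repeat_threshold

cache = {"/videos/popular/clip1"}
print(probe("/videos/popular/clip1", cache), probe("/videos/rare/clip9", cache))
print("adversary-like:", looks_like_prober([f"/probe/{i}" for i in range(50)]))
print("adversary-like:", looks_like_prober(["/videos/popular/clip1"] * 30 +
                                           ["/videos/popular/clip2"] * 20))
```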

Vieira, Leandro, Santos, Leonel, Gonçalves, Ramiro, Rabadão, Carlos.  2019.  Identifying Attack Signatures for the Internet of Things: An IP Flow Based Approach. 2019 14th Iberian Conference on Information Systems and Technologies (CISTI). :1–7.

With more and more devices being connected to the Internet, personal and sensitive information is going around the network more than ever. Thus, security and privacy regarding IoT communications, devices, and data are a concern due to the diversity of the devices and protocols used. Since traditional security mechanisms cannot always be adequate due to the heterogeneity and resource limitations of IoT devices, we conclude that there are still several improvements to be made to second-line-of-defense mechanisms such as Intrusion Detection Systems. Using a collection of IP flows, we can monitor the network and identify properties of the data that goes in and out. Since network flow collection has a smaller footprint than packet capturing, it is a better choice for Internet of Things networks. This paper studies the IP flow properties of certain network attacks, with the goal of identifying an attack signature only by observing those properties.
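A hedged sketch of flow-based signature matching follows; the rules below are illustrative, not the signatures derived in the paper. Each record stands for an exported IP flow, and simple per-source aggregates over flow properties are enough to express patterns such as a SYN flood or a Telnet scan typical of IoT botnet recruitment.

```python
from collections import defaultdict

# Synthetic flow records: (src, dst, dst_port, packets, bytes, tcp_flags)
flows = [("10.0.0.5", "10.0.0.1", 80, 1, 60, "S")] * 300 + \
        [("10.0.0.7", "10.0.0.%d" % i, 23, 2, 120, "S") for i in range(120)] + \
        [("10.0.0.9", "10.0.0.1", 443, 40, 52000, "SAPF")]

by_src = defaultdict(list)
for f in flows:
    by_src[f[0]].append(f)

for src, fs in by_src.items():
    syn_only = [f for f in fs if f[5] == "S"]
    telnet_targets = {f[1] for f in fs if f[2] == 23}
    if len(syn_only) > 200 and all(f[3] <= 2 for f in syn_only):
        print(src, "-> matches SYN-flood-like signature")
    elif len(telnet_targets) > 100:
        print(src, "-> matches Telnet-scan signature (possible IoT botnet recruitment)")
    else:
        print(src, "-> no signature matched")
```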

2020-03-09
Chhillar, Dheeraj, Sharma, Kalpana.  2019.  ACT Testbot and 4S Quality Metrics in XAAS Framework. 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). :503–509.

The purpose of this paper is to analyze all cloud-based service models and the Continuous Integration, Deployment and Delivery process, and to propose an automated Continuous Testing and testing-as-a-service based TestBot and metrics dashboard that integrates with existing automation, bug logging, build management, configuration and test management tools. Recently, the cloud has been used by organizations to save the time, money and effort required to set up and maintain infrastructure and platforms. Continuous Integration and Delivery are now common practice within Agile methodology, enabling multiple software releases per day and ensuring that development, test and production environments can be synchronized quickly. In such an agile environment there is a need to ramp up testing tools and processes so that overall regression testing, including functional, performance and security testing, can be done alongside build deployments in real time. To support this, we researched Continuous Testing and worked with industry professionals who are involved in architecting, developing and testing software products. A lot of research has been done towards automating software testing so that testing of a software product can be done quickly and the overall testing process can be optimized. In this paper we propose the ACT TestBot tool and metrics dashboard and coin the term 4S quality metrics to quantify the quality of the software product. ACT TestBot and the metrics dashboard integrate with Continuous Integration tools, bug reporting tools, test management tools and data analytics tools to trigger automation scripts, continuously analyze application logs, open defects automatically and generate metrics reports. A defect pattern report is created to support root cause analysis and to take preventive action.

Kourai, Kenichi, Shiota, Yuji.  2019.  Consistent Offline Update of Suspended Virtual Machines in Clouds. 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :58–65.

In Infrastructure-as-a-Service clouds, there exist many virtual machines (VMs) that are not used for a long time. For such VMs, many vulnerabilities are often found in installed software while the VMs are suspended. If security updates are applied to such VMs only after the VMs are resumed, the VMs easily suffer from attacks via the Internet. To solve this problem, offline update of VMs has been proposed, but some approaches have to permit cloud administrators to resume users' VMs, while the others are applicable only to completely stopped VMs and often corrupt virtual disks if they are applied to suspended VMs. In addition, it is sometimes difficult to accurately emulate security updates offline. In this paper, we propose OUassister, which enables consistent offline update of suspended VMs. OUassister emulates security updates of VMs offline in a non-intrusive manner and applies the emulation results to the VMs online. This separation prevents the virtual disks of even suspended VMs from being corrupted. For more accurate emulation of security updates, OUassister provides an emulation environment using a technique called VM introspection. Using this environment, it automatically extracts updated files and executed scripts. We have implemented OUassister in Xen and confirmed that the time for critical online update is largely reduced.

Zhan, Dongyang, Li, Huhua, Ye, Lin, Zhang, Hongli, Fang, Binxing, Du, Xiaojiang.  2019.  A Low-Overhead Kernel Object Monitoring Approach for Virtual Machine Introspection. ICC 2019 - 2019 IEEE International Conference on Communications (ICC). :1–6.

Monitoring kernel object modifications of virtual machines is widely used by virtual-machine-introspection-based security monitors to protect virtual machines in cloud computing, for example by monitoring dentry objects to intercept file operations. However, most current virtual machine monitors, such as KVM and Xen, only support page-level monitoring, because the Intel EPT technology can only monitor page privileges. If out-of-virtual-machine security tools want to monitor some kernel objects, they need to intercept operations on the whole memory page. Since other objects are stored in the monitored pages, modifications of those objects will also trigger the monitor. Therefore, page-level memory monitoring usually introduces overhead to related kernel services of the target virtual machine. In this paper, we propose a low-overhead kernel object monitoring approach to reduce the overhead caused by page-level monitoring. The core idea is to migrate the target kernel objects to a protected memory area and then to monitor the corresponding new memory pages. Since the new pages only contain the kernel objects to be monitored, other kernel objects will not trigger our monitor. Therefore, our monitor will not introduce runtime overhead to the related kernel services. The experimental results show that our system can monitor target kernel objects effectively with only very low overhead.

Nilizadeh, Shirin, Noller, Yannic, Pasareanu, Corina S..  2019.  DifFuzz: Differential Fuzzing for Side-Channel Analysis. 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). :176–187.
Side-channel attacks allow an adversary to uncover secret program data by observing the behavior of a program with respect to a resource, such as execution time, consumed memory or response size. Side-channel vulnerabilities are difficult to reason about as they involve analyzing the correlations between resource usage over multiple program paths. We present DifFuzz, a fuzzing-based approach for detecting side-channel vulnerabilities related to time and space. DifFuzz automatically detects these vulnerabilities by analyzing two versions of the program and using resource-guided heuristics to find inputs that maximize the difference in resource consumption between secret-dependent paths. The methodology of DifFuzz is general and can be applied to programs written in any language. For this paper, we present an implementation that targets analysis of Java programs, and uses and extends the Kelinci and AFL fuzzers. We evaluate DifFuzz on a large number of Java programs and demonstrate that it can reveal unknown side-channel vulnerabilities in popular applications. We also show that DifFuzz compares favorably against Blazer and Themis, two state-of-the-art analysis tools for finding side-channels in Java programs.
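The core loop of resource-guided differential fuzzing is easy to sketch. The Python toy below is not DifFuzz (which targets Java and builds on Kelinci/AFL): it mutates a public input, runs a deliberately leaky early-exit comparison with two different secrets, and keeps the input that maximizes the difference in a cost proxy (here, the number of characters compared). The secrets and mutation strategy are invented.

```python
import random
import string

def leaky_equals(secret, guess):
    cost = 0
    for a, b in zip(secret, guess):
        cost += 1
        if a != b:
            return False, cost          # early exit leaks the length of the matching prefix
    return len(secret) == len(guess), cost

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(string.ascii_lowercase) + s[i + 1:]

secret_a, secret_b = "hunter2xyz", "zzzzzzzzzz"
best_input, best_delta = "aaaaaaaaaa", 0
for _ in range(20000):
    candidate = mutate(best_input)
    _, cost_a = leaky_equals(secret_a, candidate)
    _, cost_b = leaky_equals(secret_b, candidate)
    if abs(cost_a - cost_b) > best_delta:
        best_input, best_delta = candidate, abs(cost_a - cost_b)

print(f"input {best_input!r} separates the two secrets by {best_delta} comparisons")
```

A large final delta indicates that the comparison's cost depends on the secret, i.e., a timing side channel; a constant-time comparison would keep the delta at zero.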
Calzavara, Stefano, Conti, Mauro, Focardi, Riccardo, Rabitti, Alvise, Tolomei, Gabriele.  2019.  Mitch: A Machine Learning Approach to the Black-Box Detection of CSRF Vulnerabilities. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :528–543.
Cross-Site Request Forgery (CSRF) is one of the oldest and simplest attacks on the Web, yet it is still effective on many websites and it can lead to severe consequences, such as economic losses and account takeovers. Unfortunately, tools and techniques proposed so far to identify CSRF vulnerabilities either need manual reviewing by human experts or assume the availability of the source code of the web application. In this paper we present Mitch, the first machine learning solution for the black-box detection of CSRF vulnerabilities. At the core of Mitch there is an automated detector of sensitive HTTP requests, i.e., requests which require protection against CSRF for security reasons. We trained the detector using supervised learning techniques on a dataset of 5,828 HTTP requests collected on popular websites, which we make available to other security researchers. Our solution outperforms existing detection heuristics proposed in the literature, allowing us to identify 35 new CSRF vulnerabilities on 20 major websites and 3 previously undetected CSRF vulnerabilities on production software already analyzed using a state-of-the-art tool.
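The black-box detection step can be illustrated with a toy classifier in the spirit of Mitch; the features, training data, and model choice below are invented for illustration and are far simpler than the paper's. Each HTTP request is reduced to a small feature vector and a classifier predicts whether it is sensitive, i.e., needs CSRF protection.

```python
from sklearn.ensemble import RandomForestClassifier

def featurize(req):
    path, method, params = req["path"], req["method"], req["params"]
    return [
        int(method == "POST"),
        len(params),
        int(any(k.lower() in ("csrf", "token", "nonce") for k in params)),
        int(any(w in path for w in ("delete", "transfer", "password", "settings"))),
    ]

# Hypothetical labeled requests: 1 = sensitive (needs CSRF protection), 0 = not sensitive.
train = [
    ({"path": "/account/transfer", "method": "POST", "params": {"to": "x", "amount": "10"}}, 1),
    ({"path": "/settings/password", "method": "POST", "params": {"new": "x"}}, 1),
    ({"path": "/search", "method": "GET", "params": {"q": "cats"}}, 0),
    ({"path": "/article/42", "method": "GET", "params": {}}, 0),
    ({"path": "/profile/delete", "method": "POST", "params": {}}, 1),
    ({"path": "/images/logo.png", "method": "GET", "params": {}}, 0),
]
X = [featurize(r) for r, _ in train]
y = [label for _, label in train]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

probe = {"path": "/account/transfer", "method": "POST", "params": {"to": "y", "amount": "99"}}
print("sensitive (needs CSRF protection):", bool(clf.predict([featurize(probe)])[0]))
```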
2020-03-02
Sultana, Kazi Zakia, Chong, Tai-Yin.  2019.  A Proposed Approach to Build an Automated Software Security Assessment Framework using Mined Patterns and Metrics. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :176–181.

Software security is a major concern for developers who intend to deliver reliable software. Although there is research that focuses on vulnerability prediction and discovery, there is still a need for building security-specific metrics that measure software security and vulnerability-proneness quantitatively. Existing methods are either based on software metrics (defined on the physical characteristics of code, e.g. complexity or lines of code), which are not security-specific, or on generic patterns known as nano-patterns (Java method-level traceable patterns that characterize a Java method or function). Other methods predict vulnerabilities using text mining approaches or graph algorithms, which perform poorly in cross-project validation and fail to serve as a generalized prediction model for any system. In this paper, we envision an automated framework that will assist developers in assessing the security level of their code and guide them towards developing secure code. To accomplish this goal, we aim to refine and redefine the existing nano-patterns and software metrics to make them more security-centric so that they can be used to measure the software security level of source code (either a file or a function) with higher accuracy. We present our visionary approach through a series of three consecutive studies in which we (1) study the challenges of the current software metrics and nano-patterns in vulnerability prediction, (2) redefine and characterize the nano-patterns and software metrics so that they capture security-specific properties of code and measure the security level quantitatively, and finally (3) implement an automated framework for developers that automatically extracts the values of all the patterns and metrics for a given code segment and flags the estimated security level as feedback based on our research results. We conducted some preliminary experiments and present results which indicate that our vision can be practically implemented and will have valuable implications for the software security community.

2020-02-26
Padmanaban, R., Thirumaran, M., Sanjana, Victoria, Moshika, A..  2019.  Security Analytics For Heterogeneous Web. 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN). :1–6.

In recent years, enterprises have been expanding their business efficiently through web applications, which has paved the way for building good relationships with their customers. The major threat faced by these enterprises is their inability to provide secure environments, as web applications are prone to severe vulnerabilities. As a result, many security standards and tools have evolved to handle the vulnerabilities. Though many vulnerability detection tools are available at present, they do not provide sufficient information on the attack. For the long-term functioning of an organization, data along with efficient analytics on the vulnerabilities is required to enhance its reliability. The proposed model thus aims to make use of machine learning with analytics to solve the problem at hand. The sequence of the attack is detected through its pattern using PAA, and the detected vulnerabilities are further classified using a machine learning technique such as SVM. Probabilistic results are provided in order to obtain numerical data sets which can be used for reporting on user and application behavior. Analyzing vulnerabilities and their impact in a heterogeneous web environment with a dynamic and reconfigurable PAA and SVM classifier is a challenging task; this approach enhances the former processing by analyzing the origin and the pattern of the attack in a more effective manner. Hence, the proposed system is designed to perform detection of attacks and addresses mitigation and prevention as part of attack prediction.
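The abstract does not expand "PAA", so the sketch below assumes the common reading, Piecewise Aggregate Approximation, which reduces a time series of request activity to a few segment means before feeding it to an SVM; that assumption, the synthetic series, and the labels are all illustrative rather than the paper's pipeline.

```python
from sklearn.svm import SVC

def paa(series, segments=4):
    # Piecewise Aggregate Approximation: mean of each of `segments` equal-length chunks.
    n = len(series)
    return [sum(series[i * n // segments:(i + 1) * n // segments]) / (n // segments)
            for i in range(segments)]

normal = [[10 + (i % 3) for i in range(60)] for _ in range(20)]   # steady traffic
bursty = [[10] * 40 + [300] * 20 for _ in range(20)]              # attack-like burst

X = [paa(s) for s in normal + bursty]
y = [0] * len(normal) + [1] * len(bursty)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Expect [0 1]: a flat series versus one that ends in a burst.
print(clf.predict([paa([10] * 60), paa([12] * 30 + [280] * 30)]))
```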

Bhatnagar, Dev, Som, Subhranil, Khatri, Sunil Kumar.  2019.  Advance Persistant Threat and Cyber Spying - The Big Picture, Its Tools, Attack Vectors and Countermeasures. 2019 Amity International Conference on Artificial Intelligence (AICAI). :828–839.

Advanced persistent threats are a primary security concern for large organizations and their technical infrastructure, ranging from cyber criminals seeking personal and financial information to state-sponsored attacks designed to disrupt and compromise infrastructure while sidestepping security efforts, thus causing serious damage to organizations. A skilled cybercriminal using multiple attack vectors and entry points navigates around the defenses, evading IDS/firewall detection and breaching the network in no time. To understand the big picture, this paper analyses an approach to advanced persistent threats by doing the same things the bad guys do on a network setup. We walk through the various steps, from foot-printing and reconnaissance, through scanning networks, gaining access and maintaining access, to finally clearing tracks, as in a real-world attack. We walk through the different attack tools and exploits used in each phase and present a comparative study of their effectiveness, along with their attack vectors and countermeasures. We conclude the paper by explaining the factors that actually qualify an attack as an advanced persistent threat.

Gountia, Debasis, Roy, Sudip.  2019.  Checkpoints Assignment on Cyber-Physical Digital Microfluidic Biochips for Early Detection of Hardware Trojans. 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI). :16–21.

Recent security studies involving the analysis of the manipulation of individual droplets of samples and reagents by digital microfluidic biochips have noted that the biochip design flow is vulnerable to piracy attacks, hardware Trojan attacks, overproduction, denial-of-service attacks, and counterfeiting. Attackers can mount bioprotocol manipulation attacks against biochips used for medical diagnosis, biochemical analysis, and frequent disease detection in the healthcare industry. Among these attacks, hardware Trojans pose a major threat, with multiple ways to leak sensitive data or alter the original functionality by performing malicious operations in biochips. In this paper, we present a systematic algorithm for the assignment of the checkpoints required for error recovery of bioprotocols in the event of hardware Trojan attacks during biochip operation. Moreover, it can guide the placement and timing of checkpoints so that the impact of an attack is reduced, thereby enhancing the security of digital microfluidic biochips. A comparative study with traditional checkpoint schemes demonstrates the superiority of the proposed algorithm, which achieves higher error detection accuracy without increasing bioprotocol completion time.
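As a toy sketch of checkpoint placement (not the paper's assignment algorithm), one can treat a bioassay as a sequence of droplet operations, give each an assumed tamper-risk weight, and insert a checkpoint (a sensor read that verifies droplet state) whenever the risk accumulated since the last check exceeds a threshold. The operations, weights, and threshold below are hypothetical.

```python
operations = [("dispense", 1), ("mix", 3), ("split", 4), ("transport", 1),
              ("mix", 3), ("detect", 2), ("transport", 1), ("split", 4)]

def assign_checkpoints(ops, risk_threshold=6):
    checkpoints, acc = [], 0
    for i, (_, risk) in enumerate(ops):
        acc += risk
        if acc >= risk_threshold:        # too much unchecked risk has accumulated
            checkpoints.append(i)
            acc = 0                      # checking here resets the exposure window
    return checkpoints

for i in assign_checkpoints(operations):
    print(f"checkpoint after operation {i} ({operations[i][0]})")
```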

2020-02-17
Thomopoulos, Stelios C. A..  2019.  Maritime Situational Awareness Forensics Tools for a Common Information Sharing Environment (CISE). 2019 4th International Conference on Smart and Sustainable Technologies (SpliTech). :1–5.
CISE stands for Common Information Sharing Environment and refers to an architecture and set of protocols, procedures and services for the exchange of data and information across Maritime Authorities of EU (European Union) Member States (MS's). To enable the implementation and adoption of CISE by different MS's, the EU has funded a number of projects for the development of subsystems and adaptors intended to allow MS's to connect to and make use of CISE. In this context, the Integrated Systems Laboratory (ISL) has led the development of the corresponding Hellenic and Cypriot CISE by developing a Control, Command & Information (C2I) system that unifies all partial maritime surveillance systems into one National Situational Picture Management (NSPM) system, and adaptors that allow the interconnection of the corresponding national legacy systems to CISE and the exchange of data, information and requests between the two MS's. Furthermore, a set of forensics tools has been developed that allows geospatial and time filtering and the detection of anomalies, risk incidents, fake MMSIs, suspicious speed changes, collision paths, and gaps in AIS (Automatic Identification System), by combining motion models, AI, deep learning and fusion algorithms using data from different databases through CISE. This paper briefly discusses these developments within the EU CISE-2020, Hellenic CISE and CY-CISE projects and the benefits of sharing maritime data across CISE for both maritime surveillance and security. The prospect of using CISE to create a considerably rich database that could be used for forensic analysis, detection of suspicious maritime traffic, and maritime surveillance is also discussed.
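Two of the forensics checks mentioned above, AIS reporting gaps and implausible speed changes, can be sketched on made-up track data as follows; the thresholds are illustrative, not those used in the CISE tools.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_nm(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * asin(sqrt(a)) * 3440.065           # Earth radius in nautical miles

# Synthetic AIS reports for one MMSI: (unix time in seconds, lat, lon)
track = [(0, 35.00, 25.00), (600, 35.02, 25.03), (4200, 35.50, 26.00), (4800, 35.51, 26.02)]

MAX_GAP_S, MAX_SPEED_KN = 1800, 40
for (t1, la1, lo1), (t2, la2, lo2) in zip(track, track[1:]):
    gap = t2 - t1
    implied_speed = haversine_nm(la1, lo1, la2, lo2) / (gap / 3600)
    if gap > MAX_GAP_S:
        print(f"AIS reporting gap of {gap}s between t={t1} and t={t2}")
    if implied_speed > MAX_SPEED_KN:
        print(f"implausible implied speed {implied_speed:.0f} kn between t={t1} and t={t2}")
```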
Ezick, James, Henretty, Tom, Baskaran, Muthu, Lethin, Richard, Feo, John, Tuan, Tai-Ching, Coley, Christopher, Leonard, Leslie, Agrawal, Rajeev, Parsons, Ben et al..  2019.  Combining Tensor Decompositions and Graph Analytics to Provide Cyber Situational Awareness at HPC Scale. 2019 IEEE High Performance Extreme Computing Conference (HPEC). :1–7.
This paper describes MADHAT (Multidimensional Anomaly Detection fusing HPC, Analytics, and Tensors), an integrated workflow that demonstrates the applicability of HPC resources to the problem of maintaining cyber situational awareness. MADHAT combines two high-performance packages: ENSIGN for large-scale sparse tensor decompositions and HAGGLE for graph analytics. Tensor decompositions isolate coherent patterns of network behavior in ways that common clustering methods based on distance metrics cannot. Parallelized graph analysis then uses directed queries on a representation that combines the elements of identified patterns with other available information (such as additional log fields, domain knowledge, network topology, whitelists and blacklists, prior feedback, and published alerts) to confirm or reject a threat hypothesis, collect context, and raise alerts. MADHAT was developed using the collaborative HPC Architecture for Cyber Situational Awareness (HACSAW) research environment and evaluated on structured network sensor logs collected from Defense Research and Engineering Network (DREN) sites using HPC resources at the U.S. Army Engineer Research and Development Center DoD Supercomputing Resource Center (ERDC DSRC). To date, MADHAT has analyzed logs with over 650 million entries.
Papakonstantinou, Nikolaos, Linnosmaa, Joonas, Alanen, Jarmo, Bashir, Ahmed Z., O'Halloran, Bryan, Van Bossuyt, Douglas L..  2019.  Early Hybrid Safety and Security Risk Assessment Based on Interdisciplinary Dependency Models. 2019 Annual Reliability and Maintainability Symposium (RAMS). :1–7.
Safety and security of complex critical infrastructures are very important for economic, environmental and social reasons. The complexity of these systems introduces difficulties in the identification of safety and security risks that emerge from interdisciplinary interactions and dependencies. The discovery of safety and security design weaknesses late in the design process and during system operation can lead to increased costs, additional system complexity, delays and possibly undesirable compromises to address safety and security weaknesses.