Biblio

Filters: Keyword is Manuals
2021-06-24
Stöckle, Patrick, Grobauer, Bernd, Pretschner, Alexander.  2020.  Automated Implementation of Windows-related Security-Configuration Guides. 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). :598–610.
Hardening is the process of configuring IT systems to ensure the security of the systems' components and data they process or store. The complexity of contemporary IT infrastructures, however, renders manual security hardening and maintenance a daunting task. In many organizations, security-configuration guides expressed in the SCAP (Security Content Automation Protocol) are used as a basis for hardening, but these guides by themselves provide no means for automatically implementing the required configurations. In this paper, we propose an approach to automatically extract the relevant information from publicly available security-configuration guides for Windows operating systems using natural language processing. In a second step, the extracted information is verified using the information of available settings stored in the Windows Administrative Template files, in which the majority of Windows configuration settings is defined. We show that our implementation of this approach can extract and implement 83% of the rules without any manual effort and 96% with minimal manual effort. Furthermore, we conduct a study with 12 state-of-the-art guides consisting of 2014 rules with automatic checks and show that our tooling can implement at least 97% of them correctly. We have thus significantly reduced the effort of securing systems based on existing security-configuration guides.
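
As a hedged illustration of the extract-then-verify idea described in this abstract (not the authors' tooling), the sketch below pulls a registry setting out of a rule's prose with a regular expression and checks it against a set of known settings such as one might derive from Administrative Template files. Every path, pattern, and name in it is an assumption made for the example.

    # Illustrative sketch only, not the paper's implementation: extract a Windows
    # registry setting from a rule's natural-language text and verify it against
    # known settings (as an ADMX parser might supply). All names are assumptions.
    import re

    KNOWN_SETTINGS = {
        (r"HKLM\SOFTWARE\Policies\Microsoft\Windows\EventLog\Application", "MaxSize"),
    }

    RULE_TEXT = (
        r"Set the registry value HKLM\SOFTWARE\Policies\Microsoft\Windows"
        r"\EventLog\Application\MaxSize to 32768 or greater."
    )

    def extract_setting(text):
        """Return (key, value name, expected value) parsed from a rule description."""
        match = re.search(r"(HKLM\\[\w\\ ]+)\\(\w+) to (\d+)", text)
        if not match:
            return None
        key, value_name, expected = match.groups()
        return key, value_name, int(expected)

    setting = extract_setting(RULE_TEXT)
    if setting and (setting[0], setting[1]) in KNOWN_SETTINGS:
        print("rule can be implemented automatically:", setting)
    else:
        print("rule needs manual review")
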
2021-05-05
Coulter, Rory, Zhang, Jun, Pan, Lei, Xiang, Yang.  2020.  Unmasking Windows Advanced Persistent Threat Execution. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :268–276.

The advanced persistent threat (APT) landscape has been studied without quantifiable data, for which indicators of compromise (IoC) may be uniformly analyzed, replicated, or used to support security mechanisms. This work culminates extensive academic and industry APT analysis, not as an incremental step in existing approaches to APT detection, but as a new benchmark of APT related opportunity. We collect 15,259 APT IoC hashes, retrieving subsequent sandbox execution logs across 41 different file types. This work forms an initial focus on Windows-based threat detection. We present a novel Windows APT executable (APT-EXE) dataset, made available to the research community. Manual and statistical analysis of the APT-EXE dataset is conducted, along with supporting feature analysis. We draw upon repeated and common APT path accesses, file types, and operations within the APT-EXE dataset to generalize APT execution footprints. A baseline case analysis successfully identifies a majority of 117 of 152 live APT samples from campaigns across 2018 and 2019.

Herrera, Adrian.  2020.  Optimizing Away JavaScript Obfuscation. 2020 IEEE 20th International Working Conference on Source Code Analysis and Manipulation (SCAM). :215–220.

JavaScript is a popular attack vector for releasing malicious payloads on unsuspecting Internet users. Authors of this malicious JavaScript often employ numerous obfuscation techniques in order to prevent automatic detection by antivirus software and hinder manual analysis by professional malware analysts. Consequently, this paper presents SAFE-DEOBS, a JavaScript deobfuscation tool that we have built. The aim of SAFE-DEOBS is to automatically deobfuscate JavaScript malware such that an analyst can more rapidly determine the malicious script's intent. This is achieved through a number of static analyses, inspired by techniques from compiler theory. We demonstrate the utility of SAFE-DEOBS through a case study on real-world JavaScript malware, and show that it is a useful addition to a malware analyst's toolset.
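
As a hedged sketch of one compiler-style deobfuscation pass of the kind mentioned above, the snippet below performs constant folding of string concatenations on Python's own AST as a stand-in for a JavaScript AST; SAFE-DEOBS itself is not reproduced here, and all names are illustrative.

    # Minimal sketch of one compiler-style deobfuscation pass (constant folding),
    # shown on Python's AST as a stand-in for a JavaScript AST. Not SAFE-DEOBS.
    import ast

    class ConstantFolder(ast.NodeTransformer):
        """Fold 'a' + 'b' style concatenations that obfuscators often emit."""
        def visit_BinOp(self, node):
            self.generic_visit(node)  # fold children first
            if (isinstance(node.op, ast.Add)
                    and isinstance(node.left, ast.Constant)
                    and isinstance(node.right, ast.Constant)
                    and isinstance(node.left.value, str)
                    and isinstance(node.right.value, str)):
                return ast.copy_location(
                    ast.Constant(node.left.value + node.right.value), node)
            return node

    obfuscated = "window['al' + 'er' + 't']('payload')"
    tree = ast.parse(obfuscated, mode="eval")
    folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
    print(ast.unparse(folded))  # window['alert']('payload')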

2021-03-22
Kellogg, M., Schäf, M., Tasiran, S., Ernst, M. D..  2020.  Continuous Compliance. 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). :511–523.
Vendors who wish to provide software or services to large corporations and governments must often obtain numerous certificates of compliance. Each certificate asserts that the software satisfies a compliance regime, like SOC or the PCI DSS, to protect the privacy and security of sensitive data. The industry standard for obtaining a compliance certificate is an auditor manually auditing source code. This approach is expensive, error-prone, partial, and prone to regressions. We propose continuous compliance to guarantee that the codebase stays compliant on each code change using lightweight verification tools. Continuous compliance increases assurance and reduces costs. Continuous compliance is applicable to any source-code compliance requirement. To illustrate our approach, we built verification tools for five common audit controls related to data security: cryptographically unsafe algorithms must not be used, keys must be at least 256 bits long, credentials must not be hard-coded into program text, HTTPS must always be used instead of HTTP, and cloud data stores must not be world-readable. We evaluated our approach in three ways. (1) We applied our tools to over 5 million lines of open-source software. (2) We compared our tools to other publicly-available tools for detecting misuses of encryption on a previously-published benchmark, finding that only ours are suitable for continuous compliance. (3) We deployed a continuous compliance process at AWS, a large cloud-services company: we integrated verification tools into the compliance process (including auditors accepting their output as evidence) and ran them on over 68 million lines of code. Our tools and the data for the former two evaluations are publicly available.
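
As a hedged illustration of the lightweight, repeatable checks that continuous compliance relies on (not the authors' verification tools), the sketch below scans source text for two of the listed controls: plain-HTTP URLs and key sizes below 256 bits. The patterns and example input are assumptions made for the example; a CI job would run such a check on every commit and fail the build on findings.

    # Illustrative "continuous compliance" style checks (not the authors' tools):
    # flag plain-HTTP URLs and key sizes below 256 bits in source text.
    import re
    import sys

    HTTP_URL = re.compile(r"\bhttp://", re.IGNORECASE)
    KEY_SIZE = re.compile(r"\bkey_?(?:size|length)\s*[:=]\s*(\d+)", re.IGNORECASE)

    def check_source(text, path="<memory>"):
        findings = []
        for lineno, line in enumerate(text.splitlines(), start=1):
            if HTTP_URL.search(line):
                findings.append(f"{path}:{lineno}: use HTTPS instead of HTTP")
            m = KEY_SIZE.search(line)
            if m and int(m.group(1)) < 256:
                findings.append(f"{path}:{lineno}: key size {m.group(1)} < 256 bits")
        return findings

    if __name__ == "__main__":
        sample = 'endpoint = "http://internal.example"\nkey_size = 128\n'
        findings = check_source(sample)
        for finding in findings:
            print(finding)
        sys.exit(1 if findings else 0)  # non-zero exit fails the CI job
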
2021-03-16
Li, M., Wang, F., Gupta, S..  2020.  Data-driven fault model development for superconducting logic. 2020 IEEE International Test Conference (ITC). :1–5.

Superconducting technology is being seriously explored for certain applications. We propose a new clean-slate method to derive fault models from large numbers of simulation results. For this technology, our method identifies completely new fault models – overflow, pulse-escape, and pattern-sensitive – in addition to the well-known stuck-at faults.

2021-03-15
Hwang, S., Ryu, S..  2020.  Gap between Theory and Practice: An Empirical Study of Security Patches in Solidity. 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). :542–553.
Ethereum, one of the most popular blockchain platforms, provides financial transactions like payments and auctions through smart contracts. Due to the immense interest in smart contracts in academia, the research community of smart contract security has made significant improvements recently. Researchers have reported various security vulnerabilities in smart contracts, and developed static analysis tools and verification frameworks to detect them. However, it is unclear whether such great efforts from academia have indeed enhanced the security of smart contracts in reality. To understand the security level of smart contracts in the wild, we empirically studied 55,046 real-world Ethereum smart contracts written in Solidity, the most popular programming language used by Ethereum smart contract developers. We first examined how many well-known vulnerabilities the Solidity compiler has patched, and how frequently the Solidity team publishes compiler releases. Unfortunately, we observed that many known vulnerabilities are not yet patched, and some patches are not even sufficient to avoid their target vulnerabilities. Subsequently, we investigated whether smart contract developers use the most recent compiler with vulnerabilities patched. We found that developers of more than 98% of real-world Solidity contracts still use older compilers without vulnerability patches, and more than 25% of the contracts are potentially vulnerable due to the missing security patches. To understand the actual impact of the missing patches, we manually investigated potentially vulnerable contracts detected by our static analyzer and identified common mistakes by Solidity developers, which may cause serious security issues such as financial loss. We detected hundreds of vulnerable contracts, and about one fourth of the vulnerable contracts are used by thousands of people. We recommend that the Solidity team make patches that resolve known vulnerabilities correctly, and that developers use the latest Solidity compiler to avoid missing security patches.
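
As a hedged sketch of the compiler-version concern raised above (not the authors' static analyzer), the snippet below reads the pragma line of a Solidity source and compares it with an assumed patched baseline; the baseline 0.5.12 is an arbitrary illustrative choice.

    # Illustrative check: flag Solidity sources whose pragma names a compiler older
    # than an assumed patched baseline. Not the paper's analyzer; baseline is arbitrary.
    import re

    MIN_PATCHED = (0, 5, 12)

    def declared_version(source):
        """Return the (major, minor, patch) tuple named in `pragma solidity`."""
        m = re.search(r"pragma\s+solidity\s*\^?\s*(\d+)\.(\d+)\.(\d+)", source)
        return tuple(int(x) for x in m.groups()) if m else None

    contract = "pragma solidity ^0.4.24;\ncontract Wallet { /* ... */ }"
    version = declared_version(contract)
    if version is None:
        print("no pragma found; compiler version unconstrained")
    elif version < MIN_PATCHED:
        print(f"compiler {'.'.join(map(str, version))} predates assumed patched baseline")
    else:
        print("pragma at or above assumed patched baseline")
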
2021-03-09
H, R. M., Shrinivasa, R, C., M, D. R., J, A. N., S, K. R. N..  2020.  Biometric Authentication for Safety Lockers Using Cardiac Vectors. 2020 International Conference on Power, Energy, Control and Transmission Systems (ICPECTS). :1–5.

Security has become a vital component of today's technology. People wish to safeguard their valuable items in bank lockers. With growing technology, most banks have replaced manual lockers with digital lockers. Even though there are numerous biometric approaches, these are not robust. In this work we propose a new approach for personal biometric identification based on features extracted from ECG.

2021-03-04
Amadori, A., Michiels, W., Roelse, P..  2020.  Automating the BGE Attack on White-Box Implementations of AES with External Encodings. 2020 IEEE 10th International Conference on Consumer Electronics (ICCE-Berlin). :1–6.

Cloud-based payments, virtual car keys, and digital rights management are examples of consumer electronics applications that use secure software. White-box implementations of the Advanced Encryption Standard (AES) are important building blocks of secure software systems, and the attack of Billet, Gilbert, and Ech-Chatbi (BGE) is a well-known attack on such implementations. A drawback from the adversary’s or security tester’s perspective is that manual reverse engineering of the implementation is required before the BGE attack can be applied. This paper presents a method to automate the BGE attack on a class of white-box AES implementations with a specific type of external encoding. The new method was implemented and applied successfully to a CHES 2016 capture the flag challenge.

2021-02-10
Anagandula, K., Zavarsky, P..  2020.  An Analysis of Effectiveness of Black-Box Web Application Scanners in Detection of Stored SQL Injection and Stored XSS Vulnerabilities. 2020 3rd International Conference on Data Intelligence and Security (ICDIS). :40–48.

Black-box web application scanners are used to detect vulnerabilities in a web application without any knowledge of its source code. Recent research has shown their poor performance in detecting stored Cross-Site Scripting (XSS) and stored SQL Injection (SQLI). The detection efficiency of four black-box scanners on two testbeds, Wackopicko and the custom testbed Scanit (obtained from [5]), is analyzed in this paper. The analysis showed that the scanners need to be improved for better detection of multi-step stored XSS and stored SQLI. This study involves the interaction between the selected scanners and the web application to measure their efficiency in inserting proper attack vectors into the appropriate fields. The results of this research indicate that there is not much difference in performance between the open-source and commercial black-box scanners used in this research; the choice may instead depend on the policies and trust requirements of the companies using them. This paper also provides some recommendations to improve the detection rate of stored SQLI and stored XSS vulnerabilities. The study concludes that the state-of-the-art of automated black-box web application scanners in 2020 needs to be improved to detect stored XSS and stored SQLI more effectively.
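
As a hedged sketch of why stored (multi-step) vulnerabilities are hard for scanners, the snippet below posts a unique marker to one page and then checks whether it reappears unencoded on another; it is not one of the scanners evaluated, and the URLs, form field, and marker format are placeholders.

    # Illustrative two-step stored-XSS probe (not one of the scanners studied):
    # post a unique marker, then check whether it is reflected unencoded on a
    # second page. URLs, the form field, and the marker format are placeholders.
    import uuid
    import requests

    SUBMIT_URL = "http://testbed.example/guestbook/post"   # assumed injection point
    VIEW_URL = "http://testbed.example/guestbook/view"     # assumed output page

    def probe_stored_xss(session):
        marker = f"<script>/*{uuid.uuid4().hex}*/</script>"
        session.post(SUBMIT_URL, data={"comment": marker}, timeout=10)
        page = session.get(VIEW_URL, timeout=10).text
        return marker in page   # unencoded reflection suggests a stored XSS sink

    if __name__ == "__main__":
        try:
            with requests.Session() as s:
                found = probe_stored_xss(s)
            print("potential stored XSS" if found else "no reflection observed")
        except requests.RequestException as exc:
            print("testbed unreachable:", exc)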

2020-12-02
Wang, W., Xuan, S., Yang, W., Chen, Y..  2019.  User Credibility Assessment Based on Trust Propagation in Microblog. 2019 Computing, Communications and IoT Applications (ComComAp). :270–275.

Nowadays, Microblog has become an important online social networking platform, and a large number of users share information through it. Many malicious users have released false news driven by various interests, which seriously affects the availability of the Microblog platform. Therefore, evaluating Microblog user credibility has become an important research issue. This paper proposes a Microblog user credibility evaluation algorithm based on trust propagation. In view of the high cost and low precision caused by malicious users attacking the algorithm through false social relationships, and by the manual selection of seed sets, this paper proposes two optimization strategies: a pruning algorithm based on social activity and similarity, and a seed node selection algorithm based on clustering. The pruning algorithm trims off the attack edges established between malicious users and normal users. The seed node selection algorithm efficiently selects a highly available seed node set, and the user social relationship graph is then used to perform two-way trust propagation scoring, so that low-trust users receive lower trust scores and malicious users are thereby identified. The related experiments verify the effectiveness of the proposed algorithm in evaluating Microblog user credibility.
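
As a hedged sketch of trust propagation in its simplest seed-based form (not the paper's algorithm, and omitting the pruning and clustering-based seed selection), trust can be spread from a seed set over the social graph as below; the toy graph, damping factor, and round count are assumptions.

    # Illustrative seed-based trust propagation over a toy follower graph; the
    # paper's pruning and clustering-based seed selection steps are omitted.
    from collections import defaultdict

    edges = {  # user -> users they follow/trust (assumed toy data)
        "alice": ["bob", "carol"],
        "bob": ["carol"],
        "carol": ["alice"],
        "spammer": ["alice"],
    }
    seeds = {"alice"}   # manually vetted trustworthy accounts
    DAMPING = 0.85
    ROUNDS = 20

    trust = defaultdict(float, {s: 1.0 for s in seeds})
    for _ in range(ROUNDS):
        nxt = defaultdict(float, {s: 1.0 for s in seeds})
        for user, followed in edges.items():
            if not followed or trust[user] == 0.0:
                continue
            share = DAMPING * trust[user] / len(followed)
            for other in followed:
                nxt[other] += share        # trust flows along social ties
        trust = nxt

    for user in sorted(edges, key=lambda u: trust[u], reverse=True):
        print(f"{user:8s} trust={trust[user]:.3f}")  # low scores flag likely malicious users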

2020-12-01
Nam, C., Li, H., Li, S., Lewis, M., Sycara, K..  2018.  Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :825–830.

In this paper, we study trust-related human factors in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We compare three LOAs: manual, mixed-initiative (MI), and fully autonomous LOA. In the manual LOA, the human operator chooses headings for a flocking swarm, issuing new headings as needed. In the fully autonomous LOA, the swarm is redirected automatically by changing headings using a search algorithm. In the mixed-initiative LOA, if performance declines, control is switched from human to swarm or swarm to human. The result of this work extends the current knowledge on human factors in swarm supervisory control. Specifically, the finding that the relationship between trust and performance improved for passively monitoring operators (i.e., improved situation awareness in higher LOAs) is particularly novel in its contradiction of earlier work. We also discover that operators switch the degree of autonomy when their trust in the swarm system is low. Last, our analysis shows that operators' preference for a lower LOA is confirmed for a new domain of swarm control.

2020-10-26
Black, Paul, Gondal, Iqbal, Vamplew, Peter, Lakhotia, Arun.  2019.  Evolved Similarity Techniques in Malware Analysis. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :404–410.

Malware authors are known to reuse existing code; this development process results in software evolution and a sequence of versions of a malware family containing functions that show a divergence from the initial version. This paper proposes the term evolved similarity to account for this gradual divergence of similarity across the version history of a malware family. While existing techniques are able to match functions in different versions of malware, these techniques work best when the version changes are relatively small. This paper introduces the concept of evolved similarity and presents automated Evolved Similarity Techniques (EST). EST differs from existing malware function similarity techniques by focusing on the identification of significantly modified functions in adjacent malware versions and may also be used to identify function similarity in malware samples that differ by several versions. The challenge in identifying evolved malware function pairs lies in identifying features that are relatively invariant across evolved code. The research in this paper makes use of the function call graph to establish these features and then demonstrates the use of these techniques using Zeus malware.
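
As a hedged sketch of matching functions across adjacent malware versions using call-graph-derived features (a simplification, not the paper's EST feature set), the snippet below compares each function's set of callees with Jaccard similarity; the function names, callee sets, and threshold are invented.

    # Illustrative call-graph based function matching across two versions; the
    # callee-set feature and 0.5 threshold are simplifications, not EST itself.
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 1.0

    # function name -> set of callees, as a disassembler might report (toy data)
    v1 = {
        "decrypt_config": {"rc4_init", "rc4_crypt", "memcpy"},
        "c2_beacon": {"connect", "send", "recv", "decrypt_config"},
    }
    v2 = {
        "sub_401200": {"rc4_init", "rc4_crypt", "memcpy", "sleep"},   # evolved decrypt_config
        "sub_4015f0": {"connect", "send", "recv", "sub_401200"},      # evolved c2_beacon
    }

    THRESHOLD = 0.5
    for old_name, old_callees in v1.items():
        best_name, best_callees = max(v2.items(), key=lambda kv: jaccard(old_callees, kv[1]))
        score = jaccard(old_callees, best_callees)
        if score >= THRESHOLD:
            print(f"{old_name} ~ {best_name} (similarity {score:.2f})")
        else:
            print(f"{old_name}: no confident match")

Note how the renamed callee lowers the second score, which illustrates why choosing features that remain invariant under code evolution is the hard part of the problem.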

2020-07-27
Liu, Xianyu, Zheng, Min, Pan, Aimin, Lu, Quan.  2018.  Hardening the Core: Understanding and Detection of XNU Kernel Vulnerabilities. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :10–13.
The occurrence of security vulnerabilities in kernels, especially the macOS/iOS kernel XNU, has increased rapidly in recent years. Naturally, concerns were raised due to the high risks they would lead to, which in general are much more serious than common application vulnerabilities. However, discovering XNU kernel vulnerabilities is always very challenging, and the main approach in practice is still manual analysis, which obviously is not a scalable method. In this paper, we perform an in-depth empirical study on the 406 published XNU kernel vulnerabilities to identify their distinguishing characteristics and then leverage these features to guide our vulnerability detection, i.e., locating suspicious functions. To further improve the efficiency of vulnerability detection, we present KInspector, a new and lightweight framework to detect XNU kernel vulnerabilities by leveraging feedback-based fuzzing techniques. We thoroughly evaluate our approach on XNU with various versions, and the results turn out to be quite promising: 21 N/0-day vulnerabilities have been discovered in our experiments.
2020-06-08
Hakobyan, Hovhannes H., Vardumyan, Arman V., Kostanyan, Harutyun T..  2019.  Unit Regression Test Selection According To Different Hashing Algorithms. 2019 IEEE East-West Design & Test Symposium (EWDTS). :1–4.
An approach for effective regression test selection is proposed, which minimizes the resource usage and the amount of time required for complete testing of new features. Details are provided of the analysis of the hashing algorithms used during implementation, an in-depth review of the software, and the results achieved during the testing process.
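
As a hedged sketch of hash-based regression test selection (the choice of SHA-256 and the file-to-test mapping below are assumptions; the paper compares different hashing algorithms rather than prescribing one), only tests whose dependency files changed since the previous run are selected.

    # Illustrative hash-based regression test selection: re-run only the tests whose
    # dependency files changed since the last recorded run. Mapping and hash are assumed.
    import hashlib
    import json
    from pathlib import Path

    TEST_DEPENDENCIES = {              # test name -> source files it covers (assumed)
        "test_parser": ["parser.py", "tokens.py"],
        "test_renderer": ["renderer.py"],
    }
    BASELINE = Path("hash_baseline.json")

    def file_hash(path):
        p = Path(path)
        data = p.read_bytes() if p.exists() else b""   # missing files hash as empty
        return hashlib.sha256(data).hexdigest()

    def select_tests():
        old = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
        current = {f: file_hash(f)
                   for deps in TEST_DEPENDENCIES.values() for f in deps}
        selected = [test for test, deps in TEST_DEPENDENCIES.items()
                    if any(current[f] != old.get(f) for f in deps)]
        BASELINE.write_text(json.dumps(current, indent=2))  # becomes next run's baseline
        return selected

    if __name__ == "__main__":
        print("tests to run:", select_tests())
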
2020-04-13
Verma, Dinesh, Bertino, Elisa, de Mel, Geeth, Melrose, John.  2019.  On the Impact of Generative Policies on Security Metrics. 2019 IEEE International Conference on Smart Computing (SMARTCOMP). :104–109.
Policy based Security Management is an accepted practice in the industry, and is required to simplify the administrative overhead associated with security management in complex systems. However, the growing dynamicity, complexity and scale of modern systems make it difficult to write security policies manually. Using AI, we can generate policies automatically. Security policies generated automatically can reduce the manual burden of defining policies, but their impact on the overall security of a system is unclear. In this paper, we discuss the security metrics that can be associated with a system using generative policies, and provide a simple model to determine the conditions under which generating security policies will be beneficial for improving the security of the system. We also show that for some types of security metrics, a system using generative policies can be considered equivalent to a system using manually defined policies, and the security metrics of the generative-policy-based system can be mapped to the security metrics of the manual system and vice versa.
2020-03-23
Pewny, Jannik, Koppe, Philipp, Holz, Thorsten.  2019.  STEROIDS for DOPed Applications: A Compiler for Automated Data-Oriented Programming. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :111–126.
The wide-spread adoption of system defenses such as the randomization of code, stack, and heap raises the bar for code-reuse attacks. Thus, attackers utilize a scripting engine in target programs like a web browser to prepare the code-reuse chain, e.g., relocate gadget addresses or perform a just-in-time gadget search. However, many types of programs do not provide such an execution context that an attacker can use. Recent advances in data-oriented programming (DOP) explored an orthogonal way to abuse memory corruption vulnerabilities and demonstrated that an attacker can achieve Turing-complete computations without modifying code pointers in applications. As of now, constructing DOP exploits requires a lot of manual work, for every combination of application and payload anew. In this paper, we present novel techniques to automate the process of generating DOP exploits. We implemented a compiler called STEROIDS that leverages these techniques and compiles our high-level language SLANG into low-level DOP data structures driving malicious computations at run time. This enables an attacker to specify her intent in an application- and vulnerability-independent manner to maximize reusability. We demonstrate the effectiveness of our techniques and prototype implementation by specifying four programs of varying complexity in SLANG that calculate the Levenshtein distance, traverse a pointer chain to steal a private key, relocate a ROP chain, and perform a JIT-ROP attack. STEROIDS compiles each of those programs to low-level DOP data structures targeted at five different applications including GStreamer, Wireshark and ProFTPd, which have vastly different vulnerabilities and DOP instances. Ultimately, this shows that our compiler is versatile, can be used for both 32-bit and 64-bit applications, works across bug classes, and enables highly expressive attacks without conventional code-injection or code-reuse techniques in applications lacking a scripting engine.
2020-03-09
Calzavara, Stefano, Conti, Mauro, Focardi, Riccardo, Rabitti, Alvise, Tolomei, Gabriele.  2019.  Mitch: A Machine Learning Approach to the Black-Box Detection of CSRF Vulnerabilities. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :528–543.

Cross-Site Request Forgery (CSRF) is one of the oldest and simplest attacks on the Web, yet it is still effective on many websites and it can lead to severe consequences, such as economic losses and account takeovers. Unfortunately, tools and techniques proposed so far to identify CSRF vulnerabilities either need manual reviewing by human experts or assume the availability of the source code of the web application. In this paper we present Mitch, the first machine learning solution for the black-box detection of CSRF vulnerabilities. At the core of Mitch there is an automated detector of sensitive HTTP requests, i.e., requests which require protection against CSRF for security reasons. We trained the detector using supervised learning techniques on a dataset of 5,828 HTTP requests collected on popular websites, which we make available to other security researchers. Our solution outperforms existing detection heuristics proposed in the literature, allowing us to identify 35 new CSRF vulnerabilities on 20 major websites and 3 previously undetected CSRF vulnerabilities on production software already analyzed using a state-of-the-art tool.
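
As a hedged sketch of the core classification step, training a model to label HTTP requests as security-sensitive from simple features (the features, toy labels, and model choice below are illustrative assumptions, not Mitch's actual feature set or dataset), consider:

    # Illustrative classifier for "sensitive" (CSRF-relevant) HTTP requests, in the
    # spirit of Mitch but with made-up features and toy labels.
    from sklearn.ensemble import RandomForestClassifier

    def featurize(req):
        """Tiny illustrative feature vector for an HTTP request."""
        return [
            1 if req["method"] == "POST" else 0,
            len(req["params"]),
            1 if any(k in req["params"] for k in ("password", "amount", "email")) else 0,
            1 if "csrf" in " ".join(req["params"]).lower() else 0,
        ]

    training = [
        ({"method": "POST", "params": ["amount", "to_account"]}, 1),
        ({"method": "POST", "params": ["email", "password"]}, 1),
        ({"method": "GET", "params": ["q"]}, 0),
        ({"method": "GET", "params": ["page", "sort"]}, 0),
    ]
    X = [featurize(r) for r, _ in training]
    y = [label for _, label in training]

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    unseen = {"method": "POST", "params": ["amount", "recipient"]}
    print("sensitive" if clf.predict([featurize(unseen)])[0] else "not sensitive")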

2020-02-17
MacDermott, Áine, Lea, Stephen, Iqbal, Farkhund, Idowu, Ibrahim, Shah, Babar.  2019.  Forensic Analysis of Wearable Devices: Fitbit, Garmin and HETP Watches. 2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–6.
Wearable technology has been on an exponential rise and shows no signs of slowing down. One category of wearable technology is fitness bands, which have the potential to show a user's activity levels and location data. Such information stored in fitness bands is just the beginning of a long trail of evidence fitness bands can store, which represents a huge opportunity for digital forensic practitioners. A review of recent work and research in this area suggests that no similar work has already taken place on fitness bands and, in particular, on the devices in this study: a Garmin Forerunner 110, a Fitbit Charge HR and a generic low-cost HETP fitness tracker. In this paper, we present our analysis of these devices for any possible digital evidence in a forensically sound manner, identifying files of interest and location data on the device. Data accuracy and validity of the evidence are shown, as a test run scenario wearing all of the devices allowed for data comparison analysis.
2019-02-14
Xu, Z., Shi, C., Cheng, C. C., Gong, N. Z., Guan, Y..  2018.  A Dynamic Taint Analysis Tool for Android App Forensics. 2018 IEEE Security and Privacy Workshops (SPW). :160-169.

The plethora of mobile apps introduces critical challenges to digital forensics practitioners, due to the diversity and the large number (millions) of mobile apps available to download from Google Play, the Apple App Store, as well as hundreds of other online app stores. Law enforcement investigators often find themselves in a situation in which, on the seized mobile phone devices, there are many popular and less-popular apps with interfaces in different languages and with different functionalities. Investigators would not be able to have sufficient expert knowledge about every single app, sometimes not even a very basic understanding of what possible evidentiary data could be discoverable from the mobile devices being investigated. Existing literature in the digital forensics field shows that most such investigations still rely on the investigator's manual analysis using mobile forensic toolkits like Cellebrite and Encase. The problem with such manual approaches is that there is no guarantee on the completeness of the evidence discovery. Our goal is to develop an automated mobile app analysis tool to analyze an app and discover what types of forensic evidentiary data that app generates, and where it stores them, locally on the mobile device or remotely on external 3rd-party server(s). With the app analysis tool, we will build a database of mobile apps, and for each app, we will create a list of app-generated evidence in terms of data types, locations (and/or sequences of locations) and data format/syntax. The outcome from this research will help digital forensic practitioners to reduce the complexity of their case investigations and provide a better completeness guarantee of evidence discovery, thereby delivering timely and more complete investigative results, and eventually reducing backlogs at crime labs. In this paper, we present the main technical approaches we used to implement a dynamic taint analysis tool for Android app forensics. With the tool, we have analyzed 2,100 real-world Android apps. For each app, our tool produces the list of evidentiary data (e.g., GPS locations, device ID, contacts, browsing history, and some user inputs) that the app could have collected and stored on the device's local storage in the form of a file or SQLite database. We have evaluated our tool using both benchmark apps and real-world apps. Our results demonstrate the initial success of our tool in accurately discovering the evidentiary data.

2018-03-05
Mfula, H., Nurminen, J. K..  2017.  Adaptive Root Cause Analysis for Self-Healing in 5G Networks. 2017 International Conference on High Performance Computing & Simulation (HPCS). :136–143.

Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks. It is done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing and, where possible, eliminating the root causes of faults in cellular networks. Currently, the actual detection and diagnosis of faults or potential faults is still a manual and slow process often carried out by network experts who manually analyze and correlate various pieces of network data such as alarms, call traces, configuration management (CM) and key performance indicator (KPI) data in order to come up with the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated evidence-based RCA. Compared to the current common practice, our solution is faster due to automation of the entire RCA process. The solution is also cheaper because it needs fewer or no personnel to operate, and it improves efficiency through domain knowledge reuse during adaptive learning. As it uses a probabilistic Bayesian classifier, it can work with incomplete data and it can handle large datasets with complex probability combinations. Experimental results from stratified synthesized data affirmatively validate the feasibility of using such a solution as a key part of self-healing (SH), especially in emerging self-organizing network (SON) based solutions in LTE Advanced (LTE-A) and 5G.
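
As a hedged sketch of evidence-based root-cause scoring with a naive Bayesian model (the priors, likelihoods, and symptoms below are invented for illustration and are not ARCA's model), incomplete evidence is handled simply by skipping symptoms the model does not know about:

    # Illustrative Bayesian root-cause scoring over network symptoms (KPIs/alarms).
    # Priors, likelihoods, and symptom names are invented; this is not ARCA.
    PRIORS = {"hw_failure": 0.2, "misconfiguration": 0.5, "interference": 0.3}
    LIKELIHOODS = {  # P(symptom observed | cause)
        "hw_failure":       {"alarm_unit_down": 0.9, "kpi_drop_throughput": 0.6},
        "misconfiguration": {"alarm_unit_down": 0.1, "kpi_drop_throughput": 0.7},
        "interference":     {"alarm_unit_down": 0.05, "kpi_drop_throughput": 0.8},
    }

    def score_causes(evidence):
        """evidence: dict symptom -> True/False (absent symptoms may be omitted)."""
        scores = {}
        for cause, prior in PRIORS.items():
            p = prior
            for symptom, observed in evidence.items():
                likelihood = LIKELIHOODS[cause].get(symptom)
                if likelihood is None:
                    continue                  # incomplete data: skip unknown symptoms
                p *= likelihood if observed else (1.0 - likelihood)
            scores[cause] = p
        total = sum(scores.values()) or 1.0
        return {c: p / total for c, p in scores.items()}

    print(score_causes({"alarm_unit_down": True, "kpi_drop_throughput": True}))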

2018-02-21
Jiang, Z., Zhou, A., Liu, L., Jia, P., Liu, L., Zuo, Z..  2017.  CrackDex: Universal and automatic DEX extraction method. 2017 7th IEEE International Conference on Electronics Information and Emergency Communication (ICEIEC). :53–60.

With Android application packing technology evolving, there are more and more ways to harden APPs. Manually unpacking APPs becomes more difficult as the time needed for analysis increases exponentially. In the beginning, packing technology was designed to prevent APPs from being easily decompiled, tampered with and re-packed. But unfortunately, many malicious APPs have started to use packing services to protect themselves. At present, most antivirus software focuses on APPs that are unpacked, which means that if malicious APPs apply a packing service, they can easily escape a lot of antivirus software. Therefore, we should not only emphasize the importance of packing, but also concentrate on unpacking technology. Only by doing this can we protect normal APPs and not miss any harmful APPs at the same time. In this paper, we first systematically study a number of DEX packing and unpacking technologies, then propose and develop a universal unpacking system, named CrackDex, which is capable of extracting the original DEX file from a packed APP. We propose three core technologies: simulation execution, DEX reassembling, and DEX restoration, to get the unpacked DEX file. CrackDex is a part of the Dalvik virtual machine, and it monitors the execution of functions to locate the unpacking point in the portable interpreter, then launches the simulation execution, collects the data of the original DEX file through the corresponding structure pointer, and finally fulfills the unpacking process by reassembling the collected data. The results of our experiments show that CrackDex can be used to effectively unpack APPs that are packed by packing services in a universal approach, without any other knowledge of the packing service.

2017-03-08
Kesiman, M. W. A., Prum, S., Sunarya, I. M. G., Burie, J. C., Ogier, J. M..  2015.  An analysis of ground truth binarized image variability of palm leaf manuscripts. 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA). :229–233.

As a very valuable cultural heritage, palm leaf manuscripts offer a new challenge for document analysis systems due to the specific characteristics of the physical support of the manuscript. With the aim of finding an optimal binarization method for palm leaf manuscript images, creating a new ground truth binarized image is a necessary step in document analysis of palm leaf manuscripts. But, with regard to the human intervention in the ground truthing process, an important remark about the subjectivity effect on the construction of the ground truth binarized image has been analysed and reported. In this paper, we present an experiment under real conditions to analyse the existence of human subjectivity in the construction of the ground truth binarized images of palm leaf manuscript images and to measure quantitatively the ground truth variability with several binarization evaluation metrics.
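
As a hedged sketch of how such variability can be measured quantitatively, the snippet below computes a pixel-wise F-measure between two annotators' binarizations with NumPy; the arrays are toy data, and the F-measure is only one of the binarization evaluation metrics commonly used in such evaluations.

    # Illustrative pixel-wise F-measure between two ground-truth binarizations.
    # The two small arrays stand in for binarized manuscript images; 1 = ink, 0 = background.
    import numpy as np

    annotator_a = np.array([[1, 1, 0, 0],
                            [0, 1, 1, 0],
                            [0, 0, 1, 1]])
    annotator_b = np.array([[1, 0, 0, 0],
                            [0, 1, 1, 0],
                            [0, 1, 1, 1]])

    def f_measure(reference, candidate):
        tp = np.sum((reference == 1) & (candidate == 1))
        fp = np.sum((reference == 0) & (candidate == 1))
        fn = np.sum((reference == 1) & (candidate == 0))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    # Treat annotator A as reference; a low score signals high inter-annotator variability.
    print(f"F-measure between annotators: {f_measure(annotator_a, annotator_b):.3f}")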

2017-02-27
Mulcahy, J. J., Huang, S..  2015.  An autonomic approach to extend the business value of a legacy order fulfillment system. 2015 Annual IEEE Systems Conference (SysCon) Proceedings. :595–600.

In the modern retailing industry, many enterprise resource planning (ERP) systems are considered legacy software systems that have become too expensive to replace and too costly to re-engineer. Countering the need to maintain and extend the business value of these systems is the need to do so in the simplest, cheapest, and least risky manner available. There are a number of approaches used by software engineers to mitigate the negative impact of evolving a legacy system, including leveraging service-oriented architecture to automate manual tasks previously performed by humans. A relatively recent approach in software engineering focuses upon implementing self-managing attributes, or "autonomic" behavior, in software applications and systems of applications in order to reduce or eliminate the need for human monitoring and intervention. Entire systems can be autonomic, or they can be hybrid systems that implement one or more autonomic components to communicate with external systems. In this paper, we describe a commercial development project in which a legacy multi-channel commerce enterprise resource planning system was extended with a service-oriented architecture and an autonomic control loop design to communicate with an external third-party security screening provider. The goal was to reduce the cost of the human labor necessary to screen an ever-increasing volume of orders and to reduce the potential for human error in the screening process. The solution automated what was previously an inefficient, incomplete, and potentially error-prone manual process by inserting a new autonomic software component into the existing order fulfillment workflow.

2015-05-06
Zhang, Meng, Bingham, J.D., Erickson, J., Sorin, D.J..  2014.  PVCoherence: Designing flat coherence protocols for scalable verification. 2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA). :392–403.

The goal of this work is to design cache coherence protocols with many cores that can be verified with state-of-the-art automated verification methodologies. In particular, we focus on flat (non-hierarchical) coherence protocols, and we use a mostly-automated methodology based on parametric verification (PV). We propose several design guidelines that architects should follow if they want to design protocols that can be parametrically verified. We experimentally evaluate performance, storage overhead, and scalability of a protocol verified with PV compared to a highly optimized protocol that cannot be verified with PV.

2015-05-05
Bhandari, P., Gujral, M.S..  2014.  Ontology based approach for perception of network security state. 2014 Recent Advances in Engineering and Computational Sciences (RAECS). :1–6.

This paper presents an ontological approach to perceive the current security status of the network. A computer network is a dynamic entity whose state changes with the introduction of new services, the installation of new network operating systems, the addition of new hardware components, the creation of new user roles, and attacks from various actors instigated by aggressors. The various security mechanisms employed in the network do not give a complete picture of the security of the whole network. In this paper we propose a taxonomy and an ontology which may be used to infer the impact of various events happening in the network on its security status. Vulnerability, Network and Attack are the main taxonomy classes in the ontology. The Vulnerability class describes various types of vulnerabilities in the network, which may exist in hardware components like storage devices, computing devices or network devices. The Attack class has many subclasses: the Actor class is the entity executing the attack, the Goal class describes the goal of the attack, the Attack mechanism class defines the attack methodology, the Scope class describes the size and utility of the target, and the Automation level class describes the automation level of the attack. Evaluation of the security status of the network is required for network security situational awareness. The Network class has network operating system, users, roles, hardware components and services as its subclasses. Based on this taxonomy, an ontology has been developed to perceive the network security status. Finally, a framework which uses this ontology as its knowledge base has been proposed.