Biblio

Filters: Keyword is information forensics
2019-05-08
Xiang, Jie, Chen, Long.  2018.  A Method of Docker Container Forensics Based on API. Proceedings of the 2nd International Conference on Cryptography, Security and Privacy. :159–164.
As one of the main technologies supporting virtualization in cloud computing, Docker is notable for its fast and lightweight virtualization and has been adopted by numerous platform-as-a-service (PaaS) systems, but forensics research on Docker has not yet received corresponding attention. Like traditional cloud services, Docker can serve as a carrier for storing and distributing illegal information and for launching attacks. The paper explains Docker's service principles and structural features, analyzes models and methods of forensics in related cloud environments, and then proposes a Docker container forensics solution based on the Docker API. In this approach, Docker APIs are used to derive Docker container instances, to copy and back up container data volumes, and to extract key evidence such as container log information, configuration information and image information, thereby performing localized fixing of volatile evidence and data in the Docker service container. Digital signatures and encryption are combined with this process to protect the integrity of the original evidence data.
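
A minimal sketch of the kind of API-driven collection the abstract describes, using the Docker SDK for Python (pip install docker); the container name, the selected artefacts, and the output layout are illustrative assumptions, not the authors' tooling:

```python
# Collect volatile container evidence via the Docker API and fix its
# integrity with SHA-256 digests. Container name and file names are assumed.
import hashlib
import json

import docker

client = docker.from_env()
container = client.containers.get("suspect-container")  # assumed name

# Volatile evidence: logs, runtime configuration, image metadata.
logs = container.logs(timestamps=True)
config = json.dumps(container.attrs, indent=2).encode()
image_info = json.dumps(container.image.attrs, indent=2).encode()

for name, blob in [("logs.bin", logs), ("config.json", config),
                   ("image.json", image_info)]:
    with open(name, "wb") as f:
        f.write(blob)
    print(name, hashlib.sha256(blob).hexdigest())
```
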
Richter, Timo, Escher, Stephan, Schönfeld, Dagmar, Strufe, Thorsten.  2018.  Forensic Analysis and Anonymisation of Printed Documents. Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security. :127–138.
Contrary to popular belief, the paperless office has not yet established itself. Printer forensics therefore remains an important field today, both to protect the reliability of printed documents and to track criminals. An important task is to identify the source device of a printed document. There are many forensic approaches that try to determine the source device automatically with commercially available recording devices. However, it is difficult to find intrinsic signatures that are robust against the many influences of the printing process and can at the same time identify the specific source device. In most cases, the identification rate only reaches the printer model. For this reason we reviewed document colour tracking dots, an extrinsic signature embedded in nearly all modern colour laser printers. We developed a refined and generic extraction algorithm, found a new tracking dot pattern, and decoded the pattern information. We propose to reuse document colour tracking dots in combination with passive printer forensic methods. From a privacy perspective, we additionally investigated anonymisation approaches to defeat arbitrary tracking. Finally, we propose our toolkit deda, which implements the entire workflow of extracting, analysing and anonymising a tracking dot pattern.
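
As a rough sketch of the first extraction step (the authors' toolkit deda implements the full pipeline), the following isolates candidate yellow dot pixels in a scan; the thresholds and the input file are assumptions:

```python
# Isolate faint yellow tracking dots: red and green channels high,
# blue noticeably lower. Threshold values are illustrative guesses.
import numpy as np
from PIL import Image

scan = np.asarray(Image.open("scan.png").convert("RGB")).astype(int)
r, g, b = scan[..., 0], scan[..., 1], scan[..., 2]

mask = (r > 200) & (g > 200) & (b < r - 40) & (b < g - 40)
ys, xs = np.nonzero(mask)
print(f"{len(xs)} candidate dot pixels")
# The coordinates would next be clustered into a dot grid and matched
# against known pattern layouts to decode the embedded information.
```
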
Balogun, A. M., Zuva, T..  2018.  Criminal Profiling in Digital Forensics: Assumptions, Challenges and Probable Solution. 2018 International Conference on Intelligent and Innovative Computing Applications (ICONIC). :1–7.

Cybercrime has understandably been regarded as a consequent compromise that follows the advent and perceived success of computer and internet technologies. Equally affecting the privacy, trust, finance and welfare of wealthy and low-income individuals and organizations, this menace has shown no indication of slowing down. Reports across the world have consistently shown an exponential increase in the number and cost of cyber-incidents, and more worryingly, low conviction rates of cybercriminals, over the years. Stakeholders increasingly explore ways to keep up with containing cyber-incidents by devising tools and techniques to increase the overall efficiency of investigations, but the gap keeps getting wider. However, criminal profiling, an investigative technique that has been proven to provide accurate and valuable directions to traditional crime investigations, has not seen widespread application to cybercrime investigations, including a formal methodology, due to difficulties in its seamless transference. This paper, in a bid to address this problem, seeks to preliminarily identify the exact benefits criminal profiling has brought to successful traditional crime investigations and the benefits it can translate to cybercrime investigations, identify the challenges posed by the cyber-scene to its implementation in cybercrime investigations, and proffer a practicable solution.

Ning, W., Zhi-Jun, L..  2018.  A Layer-Built Method to the Relevancy of Electronic Evidence. 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). :416–420.

To combat cyber crimes, electronic evidence has played an increasing role, but in judicial practice electronic evidence is not widely applied because of the natural contradiction between the epistemic uncertainty of electronic evidence and the judge's principle of discretionary evaluation of evidence in court. In this paper, we put forward a layer-built method to analyze the relevancy of electronic evidence, and discuss the analytical process in combination with a case study. Initial practice shows the model is feasible and has consulting value in analyzing the relevancy of electronic evidence.

Ölvecký, M., Gabriška, D..  2018.  Wiping Techniques and Anti-Forensics Methods. 2018 IEEE 16th International Symposium on Intelligent Systems and Informatics (SISY). :000127–000132.

This paper presents the theoretical background of a main research activity focused on the evaluation of wiping/erasure standards, which are mostly implemented in specific software products developed and programmed for data wiping. The information saved in storage devices often consists of metadata and trace data. These kinds of data in particular are very important in the process of forensic analysis because they sometimes contain information about interconnections with other files. Most people save their sensitive information on local storage devices and later want to securely erase these files, but this operation is usually problematic. Secure file destruction is one of many anti-forensics methods. The outcome of this paper is to define future research activities focused on the establishment of a suitable digital environment. This environment will be prepared for testing and evaluating selected wiping standards and appropriate eraser software.
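
A simplified illustration of the multi-pass overwrite idea behind many wiping standards; the pass count and patterns are assumptions, and on SSDs or journaling file systems in-place overwriting offers no hard guarantee:

```python
# Overwrite a file's contents in place (random data, then zeros),
# sync each pass to disk, then unlink the file.
import os

def wipe(path: str, passes: int = 2) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            f.seek(0)
            # Last pass writes zeros, earlier passes random bytes.
            f.write(b"\x00" * size if i == passes - 1 else os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
```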

Barni, M., Stamm, M. C., Tondi, B..  2018.  Adversarial Multimedia Forensics: Overview and Challenges Ahead. 2018 26th European Signal Processing Conference (EUSIPCO). :962–966.

In recent decades, a significant research effort has been devoted to the development of forensic tools for retrieving information and detecting possible tampering of multimedia documents. A number of counter-forensic tools have been developed as well in order to impede a correct analysis. Such tools are often very effective due to the vulnerability of multimedia forensics tools, which are not designed to work in an adversarial environment. In this scenario, developing forensic techniques capable of granting good performance even in the presence of an adversary aiming at impeding the forensic analysis is becoming a necessity. This turns out to be a difficult task, given the weakness of the traces the forensic analysis usually relies on. The goal of this paper is to provide an overview of the advances made over the last decade in the field of adversarial multimedia forensics. We first consider the viewpoints of the forensic analyst and the attacker independently, then we review some of the attempts made to simultaneously take into account both perspectives by resorting to game theory. Eventually, we discuss the hottest open problems and outline possible paths for future research.

Makrushin, Andrey, Kraetzer, Christian, Neubert, Tom, Dittmann, Jana.  2018.  Generalized Benford's Law for Blind Detection of Morphed Face Images. Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security. :49–54.
A morphed face image in a photo ID is a serious threat to image-based user verification, enabling multiple persons to be matched against the same document. The application of machine-readable travel documents (MRTD) at automated border control (ABC) gates is an example of a verification scenario that is very sensitive to this kind of fraud. Detection of morphed face images prior to face matching is, therefore, indispensable for effective border security. We introduce a face morphing detection approach based on fitting a logarithmic curve to nine Benford features extracted from quantized DCT coefficients of JPEG compressed original and morphed face images. We separately study the parameters of the logarithmic curve in face and background regions to establish the traces imposed by the morphing process. The evaluation results show that a single parameter of the logarithmic curve may be sufficient to clearly separate morphed and original images.
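
A sketch of the core measurement under stated assumptions: first-digit statistics of quantized 8x8 block-DCT coefficients fitted with a logarithmic curve. The generalized-Benford form used below is one common choice and an assumption; the paper's exact curve family and feature usage may differ:

```python
# First-digit probabilities of quantized block-DCT coefficients, fitted
# with a generalized Benford curve; fitted parameters act as features.
import numpy as np
from scipy.fft import dctn
from scipy.optimize import curve_fit

def first_digit_probs(image: np.ndarray, qstep: int = 8) -> np.ndarray:
    """First-digit frequencies of quantized 8x8 block-DCT coefficients."""
    h, w = (s - s % 8 for s in image.shape)
    blocks = image[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    coeffs = np.round(dctn(blocks, axes=(2, 3), norm="ortho") / qstep)
    digits = [int(str(int(abs(c)))[0]) for c in coeffs.ravel() if c != 0]
    return np.bincount(digits, minlength=10)[1:10] / len(digits)

def gen_benford(d, n, q, s):
    # Assumed form: p(d) = n * log10(1 + 1 / (s + d^q)).
    return n * np.log10(1 + 1.0 / (s + d ** q))

img = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(float)
probs = first_digit_probs(img)
params, _ = curve_fit(gen_benford, np.arange(1, 10, dtype=float), probs,
                      p0=(1.5, 1.0, 0.1), bounds=(0, [10.0, 5.0, 5.0]))
print("fitted (n, q, s):", params)
```
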
Chen, Yifang, Kang, Xiangui, Wang, Z. Jane, Zhang, Qiong.  2018.  Densely Connected Convolutional Neural Network for Multi-purpose Image Forensics Under Anti-forensic Attacks. Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security. :91–96.

Multiple-purpose forensics has been attracting increasing attention worldwide. However, most of the existing methods based on hand-crafted features often require domain knowledge and expensive human labour, and their performance can be affected by factors such as image size and JPEG compression. Furthermore, many anti-forensic techniques have been applied in practice, making image authentication more difficult. Therefore, it is of great importance to develop methods that can automatically learn general and robust features for image operation detectors with the capability of countering anti-forensics. In this paper, we propose a new convolutional neural network (CNN) approach for multi-purpose detection of image manipulations under anti-forensic attacks. The dense connectivity pattern, which has better parameter efficiency than the traditional pattern, is explored to strengthen the propagation of general features related to image manipulation detection. When compared with three state-of-the-art methods, experiments demonstrate that the proposed CNN architecture can achieve better performance (i.e., an 11% improvement in detection accuracy under anti-forensic attacks). The proposed method also achieves better robustness against JPEG compression, with a maximum improvement of 13% in accuracy under low-quality JPEG compression.
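
A minimal PyTorch sketch of the dense connectivity pattern the paper builds on, where each layer consumes the concatenation of all earlier feature maps; the layer sizes are arbitrary assumptions, not the paper's architecture:

```python
# Dense block: layer i receives in_ch + i*growth channels and emits
# `growth` new channels; the block output concatenates everything.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch: int, growth: int, layers: int):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1),
            )
            for i in range(layers)
        )

    def forward(self, x):
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(in_ch=16, growth=12, layers=4)
print(block(torch.randn(1, 16, 32, 32)).shape)  # (1, 16 + 4*12, 32, 32)
```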

Popov, Oliver, Bergman, Jesper, Valassi, Christian.  2018.  A Framework for a Forensically Sound Harvesting the Dark Web. Proceedings of the Central European Cybersecurity Conference 2018. :13:1–13:7.
The generative and transformative nature of the Internet, which has become a synonym for the infrastructure of contemporary digital society, also makes it a place for unsavoury and illegal activities such as fraud, human trafficking, exchange of controlled substances, arms smuggling, extremism, and terrorism. Legitimate concerns such as anonymity and privacy are exploited for the proliferation of nefarious deeds in the parts of the Internet termed the deep web and the dark web. The cryptographic and anonymity mechanisms employed by dark web miscreants create serious problems for law enforcement agencies and other legal institutions in monitoring, controlling, investigating, prosecuting, and preventing the range of criminal events which should not be part of the Internet, or human society in general. The paper describes research on developing a framework for identifying, collecting, analysing, and reporting information from the dark web in a forensically sound manner. The framework should provide the fundamentals for creating a real-life system that could be used as a tool by law enforcement institutions, digital forensics researchers and practitioners to explore and study illicit actions and their consequences on the dark web. The design science paradigm is used to develop the framework, while international security and forensic experts are behind the ex-ante evaluation of the basic components and their functionality, the architecture, and the organization of the system. Finally, we discuss future work concerning the implementation of the framework, along with the introduction of some intelligent modules that should empower the tool with adaptability, effectiveness, and efficiency.
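
One ingredient such a framework needs is forensically sound collection. The hedged sketch below fetches a page through a local Tor SOCKS proxy and immediately fixes what was collected with a hash and timestamp; the proxy port and onion URL are placeholders, and requests needs its optional SOCKS support (pip install requests[socks]):

```python
# Fetch through Tor (socks5h resolves hostnames inside the proxy, which
# .onion addresses require) and record integrity metadata for the capture.
import hashlib
from datetime import datetime, timezone

import requests

PROXIES = {"http": "socks5h://127.0.0.1:9050",
           "https": "socks5h://127.0.0.1:9050"}

resp = requests.get("http://exampleonionaddress.onion/",
                    proxies=PROXIES, timeout=60)
record = {
    "url": resp.url,
    "retrieved_at": datetime.now(timezone.utc).isoformat(),
    "sha256": hashlib.sha256(resp.content).hexdigest(),
}
print(record)  # the body itself would be stored alongside this record
```
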
Kieseberg, Peter, Schrittwieser, Sebastian, Weippl, Edgar.  2018.  Structural Limitations of B+-Tree Forensics. Proceedings of the Central European Cybersecurity Conference 2018. :9:1–9:4.
Despite the importance of databases in virtually all data-driven applications, database forensics is still not the thriving topic it ought to be. Many database management systems (DBMSs) structure the data in the form of trees, most notably B+-Trees. Since the tree structure depends on the characteristics of the INSERT-order, it can be used to generate information on later manipulations, as was shown in a previously published approach. In this work we analyse this approach and investigate whether it is possible to generalize it to detect DELETE-operations within general INSERT-only trees. We subsequently prove that almost all forms of B+-Trees can be constructed solely by using INSERT-operations, i.e. that this approach cannot be used to prove the existence of DELETE-operations in the past.
2018-03-05
Pasquier, Thomas, Han, Xueyuan, Goldstein, Mark, Moyer, Thomas, Eyers, David, Seltzer, Margo, Bacon, Jean.  2017.  Practical Whole-System Provenance Capture. Proceedings of the 2017 Symposium on Cloud Computing. :405–418.

Data provenance describes how data came to be in its present form. It includes data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While there have been several prior whole-system provenance systems that captured a comprehensive, systemic and ubiquitous record of a system's behavior, none have been widely adopted. They either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcomings by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations; performing fault/intrusion detection; and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications.

Kim, Hyunsoo, Jeon, Youngbae, Yoon, Ji Won.  2017.  Construction of a National Scale ENF Map Using Online Multimedia Data. Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. :19–28.

The frequency of power distribution networks in a power grid is called the electrical network frequency (ENF). Because it captures the spatio-temporal changes of the power grid at a particular location, ENF is used in many application domains, including the prediction of grid instability and blackouts, detection of system breakup, and even digital forensics. In order to build high-performing applications and systems, it is necessary to capture a large-scale nationwide or worldwide ENF map. Consequently, many studies have been conducted on distributing specialized physical devices that capture ENF signals. However, this approach is not practical because it requires significant effort from design to setup; moreover, it is limited in its efficiency to monitor and stably retain collection equipment distributed throughout the world, and it requires a significant budget. In this paper, we propose a novel approach to constructing a worldwide ENF map by analyzing streaming data obtained from online multimedia services, such as "Youtube", "Earthcam", and "Ustream", instead of expensive specialized hardware. However, extracting accurate ENF from streaming data is not a straightforward process because multimedia has its own noise and uncertainty. By applying several signal processing techniques, we can reduce noise and uncertainty and improve the quality of the restored ENF. For evaluation, we compared the ENF signals restored by our proposed approach with those collected by the frequency disturbance recorder (FDR) from FNET/GridEye. The experimental results show that our proposed approach outperforms the conventional approach in stable acquisition and management of ENF signals.
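
A simplified sketch of ENF restoration from recorded audio: compute a spectrogram around the nominal mains frequency and track the spectral peak over time. The 50 Hz nominal value, file name, and window length are assumptions; the paper applies considerably more signal processing:

```python
# Track the mains-hum peak near the nominal grid frequency per time frame.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

NOMINAL = 50.0  # 60.0 in the Americas and parts of Asia

fs, audio = wavfile.read("stream_audio.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # mix stereo down to mono

# Long windows (8 s) give ~0.125 Hz frequency resolution.
f, t, sxx = spectrogram(audio, fs=fs, nperseg=fs * 8, noverlap=fs * 4)
band = (f > NOMINAL - 1) & (f < NOMINAL + 1)
enf = f[band][np.argmax(sxx[band], axis=0)]  # one frequency per frame
print(list(zip(t[:5], enf[:5])))
```
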

Celik, Z. Berkay, McDaniel, Patrick, Izmailov, Rauf.  2017.  Feature Cultivation in Privileged Information-Augmented Detection. Proceedings of the 3rd ACM on International Workshop on Security And Privacy Analytics. :73–80.

Modern detection systems use sensor outputs available in the deployment environment to probabilistically identify attacks. These systems are trained on past or synthetic feature vectors to create a model of anomalous or normal behavior. Thereafter, sensor outputs collected at run-time are compared to the model to identify attacks (or the lack of attack). While this approach to detection has been proven effective in many environments, it is limited to training only on features that can be reliably collected at detection time. Hence, such systems fail to leverage the often vast amount of ancillary information available from past forensic analysis and post-mortem data. In short, detection systems do not train on (and thus do not learn from) features that are unavailable or too costly to collect at run-time. Recent work proposed an alternate model construction approach that integrates forensic "privileged" information, i.e., features reliably available at training time but not at run-time, to improve the accuracy and resilience of detection systems. In this paper, we further evaluate two of the proposed techniques for model training with privileged information: knowledge transfer and model influence. We explore the cultivation of privileged features, the efficiency of those processes and their influence on detection accuracy. We observe that the improved integration of privileged features makes the resulting detection models more accurate. Our evaluation shows that the use of privileged information leads to up to an 8.2% relative decrease in detection error for fast-flux bot detection over a system with no privileged information, and 5.5% for malware classification.
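
A toy sketch of the knowledge-transfer idea: privileged (training-only) features are regressed from run-time features, so the detector can be fed estimated privileged features at detection time. The data shapes and models are assumptions for illustration, not the paper's setup:

```python
# Knowledge transfer with privileged information, in miniature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_rt = rng.normal(size=(500, 10))               # run-time features
X_priv = X_rt[:, :3] @ rng.normal(size=(3, 4))  # privileged, correlated
y = (X_priv.sum(axis=1) + rng.normal(size=500) > 0).astype(int)

# Learn a mapping from run-time to privileged features (training only).
mapper = Ridge().fit(X_rt, X_priv)

# Train the detector on run-time features plus estimated privileged ones.
X_aug = np.hstack([X_rt, mapper.predict(X_rt)])
clf = RandomForestClassifier(random_state=0).fit(X_aug, y)

# At detection time only X_rt is available; the privileged part is estimated.
x_new = rng.normal(size=(1, 10))
print(clf.predict(np.hstack([x_new, mapper.predict(x_new)])))
```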

Bhattacharjee, Shameek, Thakur, Aditya, Silvestri, Simone, Das, Sajal K..  2017.  Statistical Security Incident Forensics Against Data Falsification in Smart Grid Advanced Metering Infrastructure. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :35–45.

Compromised smart meters reporting false power consumption data in Advanced Metering Infrastructure (AMI) may have drastic consequences on a smart grid's operations. Most existing works only deal with electricity theft by customers. However, several other types of data falsification attacks are possible when meters are compromised by organized rivals. In this paper, we first propose a taxonomy of possible data falsification strategies, such as additive, deductive, camouflage and conflict, in AMI micro-grids. Then, we devise a statistical anomaly detection technique to identify the incidence of the proposed attack types by studying their impact on the observed data. Subsequently, a trust model based on Kullback-Leibler divergence is proposed to identify compromised smart meters for additive and deductive attacks. The resultant detection rates and false alarms are minimized through a robust aggregate measure that is calculated based on the detected attack type, successfully discriminating legitimate changes from malicious ones. For conflict and camouflage attacks, a generalized linear model and a Weibull-function-based kernel trick are used over the trust score to facilitate more accurate classification. Using real data sets collected from AMI, we investigate several trade-offs that occur between the attacker's revenue and costs, as well as the margin of false data and the fraction of compromised nodes. Experimental results show that our model has a high true positive detection rate, while the average false alarm rate is just 8% for most practical attack strategies, without depending on expensive hardware-based monitoring.
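
A toy version of the KL-divergence ingredient of the trust model: compare each meter's reported-consumption distribution against a trusted baseline and flag large divergences. The binning, smoothing, and synthetic data are assumptions:

```python
# Smoothed KL divergence between reported and baseline consumption histograms.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(1)
baseline = rng.normal(10, 2, 5000)          # historical honest readings
honest = rng.normal(10, 2, 500)
additive = rng.normal(10, 2, 500) + 3       # inflated (additive) readings

bins = np.linspace(0, 25, 26)
p_base, _ = np.histogram(baseline, bins=bins, density=True)

for name, reports in [("honest", honest), ("additive attack", additive)]:
    p, _ = np.histogram(reports, bins=bins, density=True)
    kld = entropy(p + 1e-9, p_base + 1e-9)  # KLD(reports || baseline)
    print(f"{name}: KLD = {kld:.3f}")       # attack yields a far larger value
```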

Alruban, Abdulrahman, Clarke, Nathan, Li, Fudong, Furnell, Steven.  2017.  Insider Misuse Attribution Using Biometrics. Proceedings of the 12th International Conference on Availability, Reliability and Security. :42:1–42:7.

Insider misuse has become a major risk for many organizations. One of the most common forms of misuse is data leakage. Such threats have turned into a real challenge to overcome and mitigate. Whilst prevention is important, incidents will inevitably occur, and as such, attribution of the leakage is key to ensuring appropriate recourse. Although digital forensics capability for analyzing digital evidence has grown rapidly, a key barrier is often being able to associate the evidence back to the individual who leaked the data. Stolen credentials and the Trojan defense are two commonly cited arguments used to complicate the issue of attribution. Furthermore, the use of a digital certificate or user ID would only associate the evidence with the account, not the individual. This paper proposes a more proactive model whereby a user's biometric information is transparently captured (during normal interactions) and embedded within the digital objects they interact with (thereby providing a direct link to the last user of any document or object). An investigation into the possibility of embedding individuals' biometric signals into image files is presented, with a particular focus upon the ability to recover the biometric information under varying degrees of modification attack. The experimental results show that even when the watermarked object is significantly modified (e.g. when only 25% of the image is available) it is still possible to recover the embedded biometric information.
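
A toy illustration of the embedding idea using plain LSB substitution with heavy redundancy, so that a cropped copy still yields the payload; the payload string and the scheme are assumptions, and the paper uses a proper watermarking method rather than raw LSBs:

```python
# Redundantly embed a biometric-derived bit string into image LSBs and
# recover it from only a cropped quarter of the image by majority vote.
import numpy as np

payload = np.unpackbits(np.frombuffer(b"user-42-template", dtype=np.uint8))

img = np.random.default_rng(2).integers(0, 256, (256, 256), dtype=np.uint8)
flat = img.ravel().copy()
tiled = np.tile(payload, len(flat) // len(payload))
flat[:len(tiled)] = (flat[:len(tiled)] & 0xFE) | tiled
stego = flat.reshape(img.shape)

# Recover from only the top quarter of the image (simulated cropping).
part = stego[:64].ravel()
bits = part[:len(part) - len(part) % len(payload)] & 1
votes = bits.reshape(-1, len(payload)).mean(axis=0) > 0.5
print(np.packbits(votes.astype(np.uint8)).tobytes())  # b'user-42-template'
```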

Mayer, Felix, Steinebach, Martin.  2017.  Forensic Image Inspection Assisted by Deep Learning. Proceedings of the 12th International Conference on Availability, Reliability and Security. :53:1–53:9.

Investigations on the charge of possessing child pornography usually require manual forensic image inspection in order to collect evidence. When storage devices are confiscated, law enforcement authorities are hence often faced with massive image datasets which have to be screened within a limited time frame. As concentration and time are highly limited resources for a human investigator, we believe that intelligent algorithms can effectively assist the inspection process by rearranging images based on their content. Thus, more relevant images can be discovered within a shorter time frame, which is of special importance in time-critical investigations of a triage character. While currently employed techniques are based on black- and whitelisting of known images, we propose to use deep learning algorithms trained for the detection of pornographic imagery, as they are able to identify new content. In our approach, we evaluated three state-of-the-art neural networks for the detection of pornographic images and employed them to rearrange simulated datasets of 1 million images containing a small fraction of pornographic content. Rearranging images according to their content allows a much earlier detection of relevant images during the actual manual inspection of the dataset, especially when the percentage of relevant images is low. With our approach, the first relevant image was discovered between positions 8 and 9 in the rearranged list on average. Without our approach of image rearrangement, the first relevant image was discovered at position 1,463 on average.
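
Operationally, the rearrangement step reduces to scoring every image with a trained classifier and sorting by score; in this sketch `score_image` is a stand-in for any of the evaluated networks, and the dataset path is an assumption:

```python
# Rank images by a relevance score so investigators see likely hits first.
import random
from pathlib import Path

def score_image(path: Path) -> float:
    """Placeholder for a CNN returning P(relevant) for an image."""
    return random.random()

paths = sorted(Path("dataset").glob("*.jpg"))
ranked = sorted(paths, key=score_image, reverse=True)
# Investigators inspect `ranked` front to back, so relevant images
# surface far earlier than under the original file ordering.
```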

Pasquini, Cecilia, Böhme, Rainer.  2017.  Information-Theoretic Bounds of Resampling Forensics: New Evidence for Traces Beyond Cyclostationarity. Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security. :3–14.

Although several methods have been proposed for the detection of resampling operations in multimedia signals and the estimation of the resampling factor, the fundamental limits for this forensic task leave open research questions. In this work, we explore the effects that a downsampling operation introduces in the statistics of a 1D signal as a function of the parameters used. We quantify the statistical distance between an original signal and its downsampled version by means of the Kullback-Leibler Divergence (KLD) in case of a wide-sense stationary 1st-order autoregressive signal model. Values of the KLD are derived for different signal parameters, resampling factors and interpolation kernels, thus predicting the achievable hypothesis distinguishability in each case. Our analysis reveals unexpected detectability in case of strong downsampling due to the local correlation structure of the original signal. Moreover, since existing detection methods generally leverage the cyclostationarity of resampled signals, we also address the case where the autocovariance values are estimated directly by means of the sample autocovariance from the signal under investigation. Under the considered assumptions, the Wishart distribution models the sample covariance matrix of a signal segment and the KLD under different hypotheses is derived.
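
The central quantity here is the KLD between zero-mean Gaussian signal models, for which a closed form exists. In the sketch below, two AR(1) covariances with different correlation stand in for the original-versus-downsampled pair derived in the paper, which is the paper's actual contribution:

```python
# KLD( N(0, s1) || N(0, s2) ) for zero-mean Gaussians with covariances
# s1, s2: 0.5 * ( tr(s2^-1 s1) - k + ln det(s2) - ln det(s1) ).
import numpy as np

def ar1_cov(n: int, rho: float) -> np.ndarray:
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def gaussian_kld(s1: np.ndarray, s2: np.ndarray) -> float:
    k = s1.shape[0]
    inv2 = np.linalg.inv(s2)
    _, ld1 = np.linalg.slogdet(s1)
    _, ld2 = np.linalg.slogdet(s2)
    return 0.5 * (np.trace(inv2 @ s1) - k + ld2 - ld1)

# Stand-in pair: the same AR(1) family with different correlations.
print(gaussian_kld(ar1_cov(64, 0.9), ar1_cov(64, 0.7)))
```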

Zhan, Yifeng, Chen, Yifang, Zhang, Qiong, Kang, Xiangui.  2017.  Image Forensics Based on Transfer Learning and Convolutional Neural Network. Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security. :165–170.

There has been growing interest in using convolutional neural networks (CNNs) in image forensics, where some excellent methods have been proposed. Training a randomly initialized model from scratch needs a large amount of training data and computational time. To solve this issue, we present a new method of training an image forensic model using prior knowledge transferred from an existing steganalysis model. We also find that CNN models tend to show poor performance when tested on a different database. With knowledge transfer, we are able to easily train an excellent model for a new database with a small amount of training data from the new database. The performance of our models is evaluated on Bossbase and BOW by detecting five forensic types, including median filtering, resampling, JPEG compression, contrast enhancement and additive Gaussian noise. Through a series of experiments, we demonstrate that our proposed method is very effective in the two scenarios mentioned above, and that our transfer-learning-based method can greatly accelerate the convergence of the CNN model. The results of these experiments show that our proposed method can detect five different manipulations with an average accuracy of 97.36%.

Ji, Yang, Lee, Sangho, Downing, Evan, Wang, Weiren, Fazzini, Mattia, Kim, Taesoo, Orso, Alessandro, Lee, Wenke.  2017.  RAIN: Refinable Attack Investigation with On-Demand Inter-Process Information Flow Tracking. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :377–390.

As modern attacks become more stealthy and persistent, detecting or preventing them at their early stages becomes virtually impossible. Instead, an attack investigation or provenance system aims to continuously monitor and log interesting system events with minimal overhead. Later, if the system observes any anomalous behavior, it analyzes the log to identify who initiated the attack and which resources were affected by the attack, and then assesses and recovers from any damage incurred. However, because of a fundamental tradeoff between log granularity and system performance, existing systems typically record system-call events without detailed program-level activities (e.g., memory operation) required for accurately reconstructing attack causality, or demand that every monitored program be instrumented to provide program-level information. To address this issue, we propose RAIN, a Refinable Attack INvestigation system based on a record-replay technology that records system-call events during runtime and performs instruction-level dynamic information flow tracking (DIFT) during on-demand process replay. Instead of replaying every process with DIFT, RAIN conducts system-call-level reachability analysis to filter out unrelated processes and to minimize the number of processes to be replayed, making inter-process DIFT feasible. Evaluation results show that RAIN effectively prunes out unrelated processes and determines attack causality with negligible false positive rates. In addition, the runtime overhead of RAIN is similar to existing system-call level provenance systems and its analysis overhead is much smaller than full-system DIFT.

Zia, Tanveer, Liu, Peng, Han, Weili.  2017.  Application-Specific Digital Forensics Investigative Model in Internet of Things (IoT). Proceedings of the 12th International Conference on Availability, Reliability and Security. :55:1–55:7.

Besides its enormous benefits to industry and the community, the Internet of Things (IoT) has introduced unique security challenges to its enablers and adopters. As the trend in cybersecurity threats continues to grow, it is likely to influence IoT deployments. Therefore, it is imperative that besides strengthening the security of IoT systems we develop effective digital forensics techniques, so that when breaches occur we can track the sources of attacks and bring perpetrators to due process with reliable digital evidence. The biggest challenge in this regard is the heterogeneous nature of devices in IoT systems and the lack of unified standards. In this paper we investigate digital forensics from an IoT perspective. We argue that besides traditional digital forensics practices it is important to have application-specific forensics in place to ensure the collection of evidence in the context of specific IoT applications. We consider the top three IoT applications and introduce a model which deals not just with traditional forensics but is applicable to digital as well as application-specific forensics processes. We believe that the proposed model will enable the collection, examination, analysis and reporting of forensically sound evidence in an IoT application-specific digital forensics investigation.

Gowda, Thamme, Hundman, Kyle, Mattmann, Chris A..  2017.  An Approach for Automatic and Large Scale Image Forensics. Proceedings of the 2nd International Workshop on Multimedia Forensics and Security. :16–20.

This paper describes the applications of deep learning-based image recognition in the DARPA Memex program and its repository of 1.4 million weapons-related images collected from the Deep web. We develop a fast, efficient, and easily deployable framework for integrating Google's Tensorflow framework with Apache Tika for automatically performing image forensics on the Memex data. Our framework and its integration are evaluated qualitatively and quantitatively and our work suggests that automated, large-scale, and reliable image classification and forensics can be widely used and deployed in bulk analysis for answering domain-specific questions.

2017-05-30
Vaughn, Jr., Rayford B., Morris, Tommy.  2016.  Addressing Critical Industrial Control System Cyber Security Concerns via High Fidelity Simulation. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :12:1–12:4.

This paper outlines a set of 10 cyber security concerns associated with Industrial Control Systems (ICS). The concerns address software and hardware development, implementation, and maintenance practices, supply chain assurance, the need for cyber forensics in ICS, a lack of awareness and training, and finally, a need for test beds which can be used to address the first 9 cited concerns. The concerns documented in this paper were developed based on the authors' combined experience conducting research in this field for the US Department of Homeland Security, the National Science Foundation, and the Department of Defense. The second half of this paper documents a virtual test bed platform which is offered as a tool to address the concerns listed in the first half of the paper. The paper discusses various types of test beds proposed in the literature for ICS research, provides an overview of the virtual test bed platform developed by the authors, and lists the future work required to extend the existing test beds to serve as a development platform.

Lacroix, Jesse, El-Khatib, Khalil, Akalu, Rajen.  2016.  Vehicular Digital Forensics: What Does My Vehicle Know About Me? Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications. :59–66.

A major component of modern vehicles is the infotainment system, which interfaces with the vehicle's drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine the general contents stored in an infotainment system. Based on the extracted data, this would help determine what stored information is relevant to law enforcement agencies and what information is non-essential when it comes to solving criminal activities relating to the vehicle itself. Overall, this would solidify the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through the use of vehicle forensics. Additionally, determining the content of these systems will allow forensic analysts to know whether they can determine anything about the end-user directly and/or indirectly.

Gu, Yufei, Lin, Zhiqiang.  2016.  Derandomizing Kernel Address Space Layout for Memory Introspection and Forensics. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :62–72.

Modern OS kernels including Windows, Linux, and Mac OS have all adopted kernel Address Space Layout Randomization (ASLR), which shifts the base address of kernel code and data into different locations in different runs. Consequently, when performing introspection or forensic analysis of kernel memory, we cannot use any pre-determined addresses to interpret the kernel events. Instead, we must derandomize the address space layout and use the new addresses. However, few efforts have been made to derandomize the kernel address space, and many questions remain, such as which approach is more efficient and robust. Therefore, we present the first systematic study of how to derandomize a kernel when given a memory snapshot of a running kernel instance. Unlike the derandomization approaches used in traditional memory exploits, in which only remote access is available, with introspection and forensics applications we can use all the information available in kernel memory to generate signatures and derandomize the ASLR. In other words, there exists a large volume of solutions for this problem. As such, in this paper we examine a number of typical approaches to generate strong signatures from both kernel code and data, based on the insight of how kernel code and data are updated, and compare them from an efficiency (in terms of simplicity, speed, etc.) and robustness (e.g., whether the approach is hard to evade or forge) perspective. In particular, we have designed four approaches: brute-force code scanning, patched code signature generation, unpatched code signature generation, and a read-only-pointer-based approach, according to the intrinsic behavior of kernel code and data with respect to kernel ASLR. We have obtained encouraging results for each of these approaches, and the corresponding experimental results are reported in this paper.
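
A toy version of the brute-force code-scanning approach: search a raw memory snapshot for a known, relocation-free code signature and compare where it is found with where a non-randomized kernel would place it. The snapshot path, signature bytes, and expected offset are assumptions:

```python
# Estimate the ASLR slide by locating a known code signature in a snapshot.
EXPECTED_OFFSET = 0x1000000   # signature location without KASLR (assumed)

signature = bytes.fromhex("554889e54883ec10")  # placeholder prologue bytes

with open("memory.snapshot", "rb") as f:
    snapshot = f.read()

hit = snapshot.find(signature)
if hit == -1:
    print("signature not found; fall back to unpatched-code signatures")
else:
    print(f"estimated KASLR slide: {hit - EXPECTED_OFFSET:#x}")
# A robust tool would require several independent signatures to agree
# and would check the result against the kernel's alignment constraints.
```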

Jadhao, Ankita R., Agrawal, Avinash J..  2016.  A Digital Forensics Investigation Model for Social Networking Site. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :130:1–130:4.

Social networking is fundamentally shifting the way we communicate, share ideas and form opinions. People of every age group use social media sites or e-commerce sites for their needs. Nowadays, almost every illegal activity involves social networks and instant messages, which means that present systems are not capable of finding all suspicious words. In this paper, we provide a brief description of the problem and a review of the different frameworks developed so far, and propose a better system which can identify criminal activity through social networking more efficiently. Ontology Based Information Extraction (OBIE) is used to identify the domain of words, and Association Rule mining to generate rules. A heuristic method checks the user database for malicious users according to predefined elements, and the Naïve Bayes method is used to identify the context behind a message or post. The experimental result is used for further action on the victim by the cyber crime department.
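
A minimal sketch of the Naïve Bayes step on invented data; the tiny training set and labels are illustrative assumptions, and the paper combines this classifier with OBIE, association rules, and heuristic checks:

```python
# Classify posts as suspicious (1) or benign (0) from word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = ["let's meet at the mall tonight",
         "selling cheap pharma no prescription dm me",
         "happy birthday hope you have a great day",
         "untraceable payment only, delete this chat after"]
labels = [0, 1, 0, 1]  # 1 = suspicious

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, labels)
print(model.predict(["dm me for cheap untraceable pharma"]))  # likely [1]
```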