Bibliography
The popularity of mobile devices and the enormous number of third-party mobile applications in the market have naturally led to several vulnerabilities being identified and abused. This is coupled with the immaturity of intrusion detection system (IDS) technology targeting mobile devices. In this paper we propose a modular host-based IDS framework for mobile devices that uses behavior analysis to profile applications on the Android platform. Anomaly detection can then be used to categorize malicious behavior and alert users. The proposed system accommodates different detection algorithms and is being tested at a major telecom operator in North America. This paper highlights the architecture, findings, and lessons learned.
The present work deals with the prediction of damage location in a composite cantilever beam using the signal from an optical fiber sensor coupled with a neural network with a back-propagation-based learning mechanism. The experimental study uses a glass/epoxy composite cantilever beam. A notch perpendicular to the axis of the beam and spanning the entire width of the beam is introduced at three different locations, viz. at the middle of the span, towards the free end of the beam, and towards the fixed end of the beam. A plastic optical fiber of 6 cm gage length is mounted on the top surface of the beam along its axis, exactly at mid-span. A He-Ne laser is used as the light source for the optical fiber, and the light emerging from the other end of the fiber is converted to an electrical signal through a converter. A three-layer feed-forward neural network architecture is adopted, having one input layer, one hidden layer, and one output layer. Three features are extracted from the signal, viz. resonance frequency, normalized amplitude, and normalized area under the resonance frequency. These three features act as inputs to the neural network's input layer. The outputs qualitatively identify the location of the notch.
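As an aside, the kind of three-layer feed-forward network trained by back-propagation that this abstract describes can be sketched in a few lines; the feature values, labels, hidden-layer width and learning rate below are invented for illustration and are not taken from the paper.

    # Minimal sketch: 3 signal features in, one hidden layer, 3 outputs coding
    # the notch location (mid-span, free end, fixed end), trained by back-propagation.
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical feature rows: [resonance frequency, normalized amplitude,
    # normalized area under the resonance peak].
    X = rng.random((30, 3))
    y = rng.integers(0, 3, size=30)        # 0 = mid-span, 1 = free end, 2 = fixed end
    T = np.eye(3)[y]                       # one-hot targets

    W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 3)); b2 = np.zeros(3)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for epoch in range(2000):
        H = sigmoid(X @ W1 + b1)           # hidden-layer activations
        O = sigmoid(H @ W2 + b2)           # output-layer activations
        dO = (O - T) * O * (1 - O)         # output delta (squared-error loss)
        dH = (dO @ W2.T) * H * (1 - H)     # back-propagated hidden delta
        W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(axis=0)
        W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

    pred = np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1)
    print("training accuracy:", (pred == y).mean())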
Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (i.e., utility-optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context (R2=0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
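For illustration, the utility comparison that underlies such a "rational" adoption decision can be written as a back-of-the-envelope check like the one below; every number in the example (account balance, compromise probabilities, login time, wage) is an invented assumption and not a value from the study.

    # Sketch: adopting 2FA is utility-optimal when the expected loss it prevents
    # exceeds the time cost of using it, valued at the user's wage.
    def rational_to_adopt(balance, p_compromise, p_compromise_with_2fa,
                          seconds_per_login, logins, hourly_wage):
        expected_loss_reduction = (p_compromise - p_compromise_with_2fa) * balance
        time_cost = seconds_per_login * logins / 3600.0 * hourly_wage
        return expected_loss_reduction > time_cost

    # Illustrative numbers only.
    print(rational_to_adopt(balance=500.0, p_compromise=0.10,
                            p_compromise_with_2fa=0.01,
                            seconds_per_login=20, logins=30, hourly_wage=15.0))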
Most Web sites rely on resources hosted by third parties such as CDNs. Third parties may be compromised or coerced into misbehaving, e.g. delivering a malicious script or stylesheet. Unexpected changes to resources hosted by third parties can be detected with the Subresource Integrity (SRI) mechanism. The focus of SRI is on scripts and stylesheets. Web fonts cannot be secured with that mechanism under all circumstances. The first contribution of this paper is to evaluate the potential for attacks using malicious fonts. With an instrumented browser we find that (1) more than 95% of the top 50,000 Web sites of the Tranco top list rely on resources hosted by third parties and that (2) only a small fraction employs SRI. Moreover, we find that more than 60% of the sites in our sample use fonts hosted by third parties, most of which are being served by Google. The second contribution of the paper is a proof of concept of a malicious font as well as a tool for automatically generating such a font, which targets security-conscious users who are used to verifying cryptographic fingerprints. Software vendors publish such fingerprints along with their software packages to allow users to verify their integrity. Due to incomplete SRI support for Web fonts, a third party could force a browser to load our malicious font. The font targets a particular cryptographic fingerprint and renders it as a desired different fingerprint. This allows attackers to fool users into believing that they download a genuine software package although they are actually downloading a maliciously modified version. Finally, we propose countermeasures that could be deployed to protect the integrity of Web fonts.
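As a side note, the SRI mechanism mentioned above pins a resource to a cryptographic digest that the browser verifies before use; a minimal sketch of computing such a value is shown below. The URL is a placeholder, and in practice the resulting string is placed in the integrity attribute of the corresponding script or link tag.

    # Sketch: compute a Subresource Integrity value (sha384, base64-encoded)
    # for a fetched resource.
    import base64, hashlib, urllib.request

    def sri_sha384(url):
        data = urllib.request.urlopen(url).read()
        digest = hashlib.sha384(data).digest()
        return "sha384-" + base64.b64encode(digest).decode()

    # print(sri_sha384("https://example.com/library.js"))  # hypothetical resource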
Cybercriminals make extensive use of the dark web and its illegal functionalities. A large share of criminal and terror activity is conducted through the dark web, including cryptocurrency-based payments, trade in human organs, red rooms, child pornography, arms deals, drug deals, the hiring of assassins and hackers, and the sale of hacking software and malware. Law enforcement agencies such as the FBI, NSA, Interpol, Mossad and the FSB continually run surveillance programs on the dark web to track down criminals and terrorists and to stop these crimes and terror activities. This paper is about dark web markets and surveillance programs. It discusses how the dark web can be accessed securely, and how law enforcement agencies track down users exhibiting terror-related behaviours and activities. Moreover, the paper discusses dark web sites through which users can obtain jihadist services and access anonymous markets, including the safety precautions involved.
With the rapid development of the Internet, the dark network has also become widely used [1]. Due to the anonymity of the dark network, many criminals commit illegal acts on it, and it is difficult for law enforcement officials to track their identities using traditional network investigation techniques based on IP addresses [2]. Threat information comes mainly from dark web forums and dark web markets. In this paper, we introduce Tor, the current mainstream dark network communication system, and develop a visual dark web forum post association analysis system that graphically displays the relationships between forum messages and posters, helping law enforcement officers explore deeper clues for analyzing crimes in the dark network.
Visible Light Communication (VLC) emerges as a new wireless communication technology with appealing benefits not present in radio communication. However, current VLC designs commonly require LED lights to emit shining light beams, which greatly limits the applicable scenarios of VLC (e.g., on a sunny day when indoor lighting is not needed). It also entails high energy overhead and unpleasant visual experiences for mobile devices to transmit data using VLC. We design and develop DarkLight, a new VLC primitive that allows light-based communication to be sustained even when LEDs emit extremely low luminance. The key idea is to encode data into ultra-short, imperceptible light pulses. We tackle challenges in circuit designs, data encoding/decoding schemes, and DarkLight networking, to efficiently generate and reliably detect ultra-short light pulses using off-the-shelf, low-cost LEDs and photodiodes. Our DarkLight prototype supports 1.3-m distance with 1.6-Kbps data rate. By loosening up VLC's reliance on visible light beams, DarkLight presents an unconventional direction of VLC design and fundamentally broadens VLC's application scenarios.
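DarkLight's actual modulation and networking stack are more elaborate than can be shown here; purely as an illustration of the underlying idea of hiding data in the timing of ultra-short pulses, the sketch below maps bit pairs to pulse positions within fixed symbol slots, with all timing parameters invented.

    # Illustrative pulse-position encoding (not DarkLight's actual scheme).
    SLOT_US = 500        # symbol slot length in microseconds (assumed)
    PULSE_US = 10        # ultra-short pulse width, imperceptible to the eye (assumed)
    POSITIONS = 4        # 4 possible pulse positions -> 2 bits per slot

    def encode(bits):
        """Map pairs of bits to pulse start times (in us) within consecutive slots."""
        times = []
        for i in range(0, len(bits), 2):
            symbol = bits[i] * 2 + bits[i + 1]
            slot_start = (i // 2) * SLOT_US
            times.append(slot_start + symbol * (SLOT_US // POSITIONS))
        return times

    def decode(times):
        bits = []
        for t in times:
            symbol = (t % SLOT_US) // (SLOT_US // POSITIONS)
            bits += [symbol >> 1, symbol & 1]
        return bits

    payload = [1, 0, 1, 1, 0, 0, 0, 1]
    assert decode(encode(payload)) == payload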
As embedded devices (under the guise of "smart-whatever") rapidly proliferate into many domains, they become attractive targets for malware. Protecting them from software and physical attacks becomes both important and challenging. Remote attestation is a basic tool for mitigating such attacks. It allows a trusted party (verifier) to remotely assess software integrity of a remote, untrusted, and possibly compromised, embedded device (prover). Prior remote attestation methods focus on software (malware) attacks in a one-verifier/one-prover setting. Physical attacks on provers are generally ruled out as being either unrealistic or impossible to mitigate. In this paper, we argue that physical attacks must be considered, particularly, in the context of many provers, e.g., a network, of devices. Assuming that physical attacks require capture and subsequent temporary disablement of the victim device(s), we propose DARPA, a lightweight protocol that takes advantage of absence detection to identify suspected devices. DARPA is resilient against a very strong adversary and imposes minimal additional hardware requirements. We justify and identify DARPA's design goals and evaluate its security and costs.
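A minimal sketch of the absence-detection idea (not the actual DARPA protocol) is given below: each device periodically emits an authenticated heartbeat, and a device whose heartbeat log shows a gap longer than the assumed minimum physical-capture time is flagged as suspect. Keys, device identifiers and timing values are all invented.

    import hmac, hashlib

    T_CAPTURE = 60.0   # assumed minimum time (s) an adversary needs physical access

    def heartbeat(key, device_id, t):
        msg = f"{device_id}:{t}".encode()
        return (device_id, t, hmac.new(key, msg, hashlib.sha256).hexdigest())

    def suspects(log, keys):
        """log: list of (device_id, timestamp, tag); yield devices with gaps > T_CAPTURE."""
        seen = {}
        for dev, t, tag in sorted(log, key=lambda e: e[1]):
            msg = f"{dev}:{t}".encode()
            if not hmac.compare_digest(tag, hmac.new(keys[dev], msg, hashlib.sha256).hexdigest()):
                continue                  # forged heartbeat, ignore
            if dev in seen and t - seen[dev] > T_CAPTURE:
                yield dev                 # absence longer than assumed capture time
            seen[dev] = t

    keys = {"dev1": b"k1", "dev2": b"k2"}
    log = [heartbeat(keys["dev1"], "dev1", t) for t in (0, 30, 150)] + \
          [heartbeat(keys["dev2"], "dev2", t) for t in (0, 30, 60, 90, 120, 150)]
    print(set(suspects(log, keys)))       # {'dev1'}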
Mixed-Criticality Systems (MCS) are real-time systems characterized by two or more distinct levels of criticality. In MCS, it is imperative that high-critical flows meet their deadlines, while low-critical flows can tolerate some delays. Sharing resources between flows in a Network-On-Chip (NoC) can lead to unpredictable latencies and subsequently complicate the implementation of MCS on many-core architectures. This paper proposes a new virtual channel router designed for MCS deployed over NoCs. The first objective of this router is to reduce the worst-case communication latency of high-critical flows. The second aim is to improve the network use rate and reduce the communication latency for low-critical flows. The proposed router, called DAS (Double Arbiter and Switching router), jointly uses Wormhole and Store And Forward techniques for low- and high-critical flows respectively. Simulations with a cycle-accurate SystemC NoC simulator show that, with a 15% network use rate, the communication delay of high-critical flows is reduced by 80% while the communication delay of low-critical flows is increased by 18% compared to usual solutions based on routers with multiple virtual channels.
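As background for the switching choice mentioned above, the zero-load latency difference between the two techniques can be sketched roughly as follows; flit times and packet sizes are illustrative assumptions, not values from the paper.

    # Rough zero-load latency comparison: with store-and-forward every router
    # buffers the whole packet before forwarding; with wormhole, flits are
    # pipelined across hops.
    def store_and_forward_latency(hops, flits, t_flit=1.0):
        return hops * flits * t_flit        # full packet re-transmitted at each hop

    def wormhole_latency(hops, flits, t_flit=1.0):
        return (hops + flits - 1) * t_flit  # header crosses hops, body flits pipeline behind

    print(store_and_forward_latency(hops=5, flits=8))   # 40.0 flit times
    print(wormhole_latency(hops=5, flits=8))            # 12.0 flit times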
With the rapid development of the smart grid, smart meters are deployed at energy consumers' premises to collect real-time usage data. Although such a communication model can help the control center of the energy producer improve the efficiency and reliability of electricity delivery, it also leads to some security issues. For example, this real-time data involves the customers' privacy. Attackers may violate that privacy to plan housebreaking, or they may tamper with the transmitted data for their own benefit. For this reason, many data aggregation schemes have been proposed for privacy preservation. However, few of them address both data aggregation and fine-grained access control to improve data utility. In this paper, we propose a data aggregation scheme based on an attribute decision tree. Security analysis illustrates that our scheme can achieve data integrity, data privacy preservation and fine-grained data access control. Experiment results show that our scheme is more efficient than existing schemes.
Security of VMs is now becoming a hot topic due to their outsourcing in the cloud computing paradigm. All VMs present on the network are connected to each other, making exploited VMs a danger to other VMs and a threat to the organization. The rejuvenation of virtualization brought the emergence of hypervisor-based security services such as VMI (virtual machine introspection). An intrusion detection system running on the same system it monitors has a greater chance of being disabled by the malware or attacker. Monitoring of VMs using VMI is therefore one of the most researched and accepted techniques used to ensure computer system security, mostly in the cloud computing paradigm. This thesis presents work that integrates LibVMI with Volatility on KVM, a Linux-based hypervisor, to introspect the memory of VMs. Both of these tools are used to monitor the state of live VMs. The VMI capability of monitoring VMs is combined with malware analysis and virtual honeypots to achieve the objective of this project. A testing environment is deployed, in which a network of VMs is introspected using Volatility plug-ins. The execution time of each plug-in run on live VMs is measured to observe the performance of the Volatility plug-ins. All these VMs are deployed as virtual honeypots with honeypots configured on them, which serve as a detection mechanism to trigger alerts when malware attacks the VMs. Using STIX (Structured Threat Information eXpression), extracted IOCs are converted into an understandable, flexible, structured and shareable format.
These days the digitization process is everywhere, spreading also across central governments and local authorities. It is hoped that, by using open government data for scientific research purposes, the public good and social justice might be enhanced. Taking into account the recently adopted European General Data Protection Regulation, the big challenge in Portugal and other European countries is how to strike the right balance between personal data privacy and data value for research. This work presents a sensitivity study of a data anonymization procedure applied to real open government data from the Brazilian higher education evaluation system. The ARX k-anonymization algorithm was applied, with and without generalization of some variables of research value. The analysis of the amount of data/information lost and the risk of re-identification suggests that the anonymization process may lead to the under-representation of minorities and sociodemographically disadvantaged groups. It will enable scientists to improve the balance among risk, data usability, and contributions to public-good policies and practices.
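For context, the k-anonymity property that ARX enforces can be illustrated in a few lines: every combination of quasi-identifier values must occur at least k times in the released data. The records, attributes and generalized values below are invented for illustration.

    from collections import Counter

    records = [
        {"age": "2*", "region": "North", "grade": 4.1},
        {"age": "2*", "region": "North", "grade": 3.7},
        {"age": "3*", "region": "South", "grade": 4.4},
        {"age": "3*", "region": "South", "grade": 2.9},
        {"age": "3*", "region": "South", "grade": 3.3},
    ]
    quasi_identifiers = ("age", "region")

    def is_k_anonymous(rows, qis, k):
        # Group records by their quasi-identifier combination and check group sizes.
        groups = Counter(tuple(r[q] for q in qis) for r in rows)
        return min(groups.values()) >= k

    print(is_k_anonymous(records, quasi_identifiers, k=2))   # True
    print(is_k_anonymous(records, quasi_identifiers, k=3))   # False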
With the rapid increase in the use of mobile devices in people's daily lives, mobile data traffic has exploded in recent years. In the edge computing environment, where edge servers are deployed around mobile users, caching popular data on edge servers can ensure mobile users' fast access to those data and reduce the data traffic between mobile users and the centralized cloud. Existing studies consider the data caching problem with a focus on reducing network delay and improving mobile devices' energy efficiency. In this paper, we attack the data caching problem in the edge computing environment from the perspective of service providers, who would like to maximize their revenues from caching their data. This problem is complicated because data caching produces benefits at a cost, and there usually is a trade-off in between. In this paper, we formulate the data caching problem as an integer programming problem that maximizes the revenue of the service provider while satisfying a constraint on data access latency. Extensive experiments are conducted on a real-world dataset that contains the locations of edge servers and mobile users, and the results reveal that our approach significantly outperforms the baseline approaches.
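A toy version of such a revenue-maximizing placement problem can be sketched as below. The paper formulates it as an integer program; this illustration simply enumerates placements under invented capacities, revenues, costs and latencies, so it only conveys the shape of the objective and constraints.

    from itertools import product

    servers = ["e1", "e2"]
    data_items = ["d1", "d2", "d3"]
    revenue = {("d1", "e1"): 9, ("d1", "e2"): 4, ("d2", "e1"): 3, ("d2", "e2"): 7,
               ("d3", "e1"): 5, ("d3", "e2"): 6}      # revenue if item cached on server
    cache_cost = 4                                     # cost per cached copy
    capacity = {"e1": 2, "e2": 1}                      # cache slots per edge server
    latency_cloud, latency_limit = 80, 60              # ms

    best, best_profit = None, float("-inf")
    for placement in product([0, 1], repeat=len(servers) * len(data_items)):
        x = {(d, s): placement[i]
             for i, (d, s) in enumerate(product(data_items, servers))}
        if any(sum(x[d, s] for d in data_items) > capacity[s] for s in servers):
            continue                                   # capacity constraint
        # latency constraint: an item served only from the cloud violates the limit
        if any(all(x[d, s] == 0 for s in servers) and latency_cloud > latency_limit
               for d in data_items):
            continue
        profit = sum(x[d, s] * (revenue[d, s] - cache_cost)
                     for d in data_items for s in servers)
        if profit > best_profit:
            best, best_profit = x, profit

    print(best_profit, [k for k, v in best.items() if v])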
The application of trust principles in the Internet of Things (IoT) has made it possible to provide more trustworthy services among the corresponding stakeholders. The most common method of assessing trust in IoT applications is to estimate the trust level of the end entities (entity-centric) relative to the trustor. In these systems, the trust level of the data is assumed to be the same as the trust level of the data source. However, most IoT-based systems are data-centric and operate in dynamic environments, which require immediate actions without waiting for a trust report from end entities. We address this challenge by extending our previous proposals on trust establishment for entities, based on their reputation, experience and knowledge, to the trust estimation of data items [1-3]. First, we present a hybrid trust framework for evaluating both data trust and entity trust, which could be developed into a standard for the future data-driven society. The modules of the proposed framework, including data trust metric extraction, data trust aggregation, evaluation and prediction, are elaborated. Finally, a possible design model is described to implement the proposed ideas.
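Purely to illustrate the distinction the abstract draws between entity trust and data trust, the sketch below combines the two with simple weighted sums; the weights, attribute names and the linear form are assumptions, not the framework's actual metrics.

    def entity_trust(reputation, experience, knowledge, w=(0.4, 0.3, 0.3)):
        # Entity-centric trust derived from reputation, experience and knowledge.
        return w[0] * reputation + w[1] * experience + w[2] * knowledge

    def data_trust(source_trust, freshness, corroboration, w=(0.5, 0.2, 0.3)):
        # Data trust depends on, but is not identical to, the source's trust.
        return w[0] * source_trust + w[1] * freshness + w[2] * corroboration

    src = entity_trust(reputation=0.8, experience=0.6, knowledge=0.7)
    print(round(data_trust(src, freshness=0.9, corroboration=0.4), 3))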
Detecting and repairing dirty data is one of the perennial challenges in data analytics, and failure to do so can result in inaccurate analytics and unreliable decisions. Over the past few years, there has been a surge of interest from both industry and academia on data cleaning problems including new abstractions, interfaces, approaches for scalability, and statistical techniques. To better understand the new advances in the field, we will first present a taxonomy of the data cleaning literature in which we highlight the recent interest in techniques that use constraints, rules, or patterns to detect errors, which we call qualitative data cleaning. We will describe the state-of-the-art techniques and also highlight their limitations with a series of illustrative examples. While traditionally such approaches are distinct from quantitative approaches such as outlier detection, we also discuss recent work that casts such approaches into a statistical estimation framework including: using Machine Learning to improve the efficiency and accuracy of data cleaning and considering the effects of data cleaning on statistical analysis.
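To make the distinction concrete, a one-rule example of qualitative error detection (a functional-dependency-style check) is sketched below with invented records; quantitative approaches would instead flag statistical outliers in the values themselves.

    from collections import defaultdict

    rows = [
        {"zip": "10001", "city": "New York"},
        {"zip": "10001", "city": "New York"},
        {"zip": "94103", "city": "San Francisco"},
        {"zip": "94103", "city": "San Jose"},       # violates the rule zip -> city
    ]

    def fd_violations(rows, lhs, rhs):
        # Collect all right-hand-side values per left-hand-side value; more than
        # one value means the functional dependency lhs -> rhs is violated.
        seen = defaultdict(set)
        for r in rows:
            seen[r[lhs]].add(r[rhs])
        return {k: v for k, v in seen.items() if len(v) > 1}

    print(fd_violations(rows, "zip", "city"))   # {'94103': {'San Francisco', 'San Jose'}}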
With the increase in mobile equipment and transmitted data, the Common Public Radio Interface (CPRI) between the Building Baseband Unit (BBU) and the Remote Radio Unit (RRU) must carry an ever-increasing volume of data. It is essential to compress the data on the CPRI if more data is to be transferred without congestion while limiting fiber consumption. A data compression scheme based on the Discrete Sine Transform (DST) and Lloyd-Max quantization is proposed for a distributed Base Station (BS) architecture. The time-domain samples are transformed by the DST according to the characteristics of Orthogonal Frequency Division Multiplexing (OFDM) baseband signals, and the coefficients after transformation are then quantized by the Lloyd-Max quantizer. The simulation results show that the proposed scheme can work at various Compression Ratios (CRs) while the values of Error Vector Magnitude (EVM) remain better than the limits specified by 3GPP.
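A rough sketch of the two stages described above (DST followed by a Lloyd-Max scalar quantizer) is given below. The random signal stands in for OFDM baseband samples and the quantizer resolution is an arbitrary assumption, so the printed error is not comparable to the paper's EVM figures.

    import numpy as np
    from scipy.fft import dst, idst

    def lloyd_max(x, n_levels=16, iters=50):
        """Iteratively fit quantizer levels: boundaries are midpoints between
        levels, levels are the means of the samples in each region."""
        levels = np.linspace(x.min(), x.max(), n_levels)
        for _ in range(iters):
            bounds = (levels[:-1] + levels[1:]) / 2
            idx = np.digitize(x, bounds)
            for i in range(n_levels):
                if np.any(idx == i):
                    levels[i] = x[idx == i].mean()
            levels.sort()                         # keep decision levels ordered
        return levels[np.digitize(x, (levels[:-1] + levels[1:]) / 2)]

    rng = np.random.default_rng(1)
    samples = rng.normal(size=1024)               # stand-in for OFDM baseband samples
    coeffs = dst(samples, type=2, norm="ortho")   # transform to the DST domain
    reconstructed = idst(lloyd_max(coeffs), type=2, norm="ortho")
    err = np.linalg.norm(reconstructed - samples) / np.linalg.norm(samples)
    print(f"relative reconstruction error ~ {100 * err:.1f}%")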
We consider the problem of attack detection for IoT networks based only on passively collected network parameters. For the first time in the literature, we develop a blind attack detection method based on data conformity evaluation. Network parameters collected passively are converted to their conformity values through iterative projections onto refined L1-norm tensor subspaces. We demonstrate our algorithmic development in a case study on a simulated star-topology network. The type of attack, the affected devices, as well as the attack time frame can be easily identified.
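The paper's conformity evaluation uses iterative projections onto L1-norm tensor subspaces; the sketch below substitutes a plain L2 (SVD) subspace purely to illustrate how a conformity score over passively collected parameters can flag an attack window. The traffic matrix and the flagging threshold are synthetic assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic per-time-slot parameters: [packets/s, new connections/s, mean packet size]
    normal = rng.normal(loc=[50, 5, 200], scale=[5, 1, 20], size=(40, 3))
    attack = rng.normal(loc=[300, 40, 60], scale=[10, 5, 10], size=(5, 3))
    X = np.vstack([normal[:30], attack, normal[30:]])   # attack occupies slots 30-34

    mean = normal[:30].mean(axis=0)
    U, _, _ = np.linalg.svd((normal[:30] - mean).T, full_matrices=False)
    basis = U[:, :1]                                    # dominant subspace of normal behaviour

    def conformity(samples):
        # Distance to the learned subspace, negated so that higher = more conforming.
        proj = (samples - mean) @ basis @ basis.T + mean
        return -np.linalg.norm(samples - proj, axis=1)

    threshold = conformity(normal[:30]).min() - 3 * conformity(normal[:30]).std()
    print("suspected attack slots:", np.where(conformity(X) < threshold)[0])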