Biblio
Content delivery applications such as P2P and video streaming generate the major part of Internet traffic, and Content-Centric Networking (CCN) appears to be an appropriate architecture to satisfy these needs. However, the lack of a scalable routing scheme is one of the main obstacles slowing down large-scale deployment of CCN on the Internet. In this paper we propose to use the Software-Defined Networking (SDN) paradigm to decouple the data plane from the control plane, and present SRSC, a new routing scheme for CCN. Our solution is a clean-slate approach using only CCN messages and the SDN paradigm. We implemented our solution in the NS-3 simulator and performed simulations of our proposal. SRSC shows better performance than the flooding scheme used by default in CCN: it reduces the number of messages while still improving CCN caching performance.
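The abstract gives no implementation details, but the core idea of replacing Interest flooding with controller-installed forwarding state can be sketched roughly as follows; this is a minimal Python illustration, not SRSC's actual NS-3 code, and the topology, content placement, and all names are invented.

    from collections import deque

    # Illustrative topology: node -> neighbors (not from the paper).
    TOPOLOGY = {"A": ["B", "C"], "B": ["A", "D"],
                "C": ["A", "D"], "D": ["B", "C"]}
    # Hypothetical content placement known to the controller.
    CONTENT_LOCATION = {"/videos/clip1": "D"}

    def controller_compute_route(src, prefix):
        """Controller-side path computation: BFS from the requesting
        node to the node hosting the content prefix."""
        dst = CONTENT_LOCATION[prefix]
        parent = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nb in TOPOLOGY[node]:
                if nb not in parent:
                    parent[nb] = node
                    queue.append(nb)
        path, node = [], dst
        while node is not None:
            path.append(node)
            node = parent[node]
        return path[::-1]

    def install_fib_entries(path, prefix, fibs):
        """Install a FIB entry (prefix -> next hop) on every node
        along the path, instead of flooding the Interest."""
        for here, nxt in zip(path, path[1:]):
            fibs.setdefault(here, {})[prefix] = nxt

    fibs = {}
    install_fib_entries(controller_compute_route("A", "/videos/clip1"),
                        "/videos/clip1", fibs)
    print(fibs)  # {'A': {'/videos/clip1': 'B'}, 'B': {'/videos/clip1': 'D'}}

Only the nodes on the computed path receive forwarding state, which is where the message savings over flooding would come from in this toy picture.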
Data security has always been a major concern and a huge challenge for governments and individuals throughout the world since early times. Recent advances in technology, such as the introduction of cloud computing, make it an even bigger challenge to keep data secure. In parallel, high-throughput mobile devices such as smartphones and tablets are designed to support these new technologies. The high throughput requires power-efficient designs to maintain battery life. In this paper, we propose a novel Joint Security and Advanced Low-Density Parity-Check (LDPC) Coding (JSALC) method. The JSALC is composed of two parts: the Joint Security and Advanced LDPC-based Encryption (JSALE) and the dual-step Secure LDPC code for Channel Coding (SLCC). The JSALE is obtained by interlacing Advanced Encryption Standard (AES)-like rounds and Quasi-Cyclic (QC)-LDPC rows into a single primitive. Both the JSALE code and the SLCC code share the same base quasi-cyclic parity-check matrix (PCM), which retains the power efficiency compared to conventional systems. We show that the overall JSALC Frame-Error-Rate (FER) performance outperforms other cryptcoding methods by over 1.5 dB while maintaining the AES-128 security level. Moreover, JSALC enables error resilience and has higher diffusion than AES-128.
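To make the shared quasi-cyclic parity-check matrix concrete, here is a minimal sketch of how a QC-LDPC PCM is expanded from a base matrix of circulant shift values and used for a parity check; the base matrix and lifting size are toy values, not the parameters of JSALE or SLCC.

    import numpy as np

    def circulant(z, shift):
        """z x z identity matrix cyclically shifted by `shift`;
        shift = -1 denotes the all-zero block."""
        if shift < 0:
            return np.zeros((z, z), dtype=np.uint8)
        return np.roll(np.eye(z, dtype=np.uint8), shift, axis=1)

    def qc_ldpc_pcm(base, z):
        """Expand a base (protograph) matrix of shift values into
        the full quasi-cyclic parity-check matrix H."""
        return np.vstack([np.hstack([circulant(z, s) for s in row])
                          for row in base])

    # Toy base matrix of shift exponents (real parameters differ).
    BASE = [[0, 1, -1, 2],
            [2, -1, 0, 1]]
    H = qc_ldpc_pcm(BASE, z=4)        # 8 x 16 binary PCM

    def syndrome(H, c):
        """c is a valid codeword iff H @ c = 0 over GF(2)."""
        return H.dot(c) % 2

    c = np.zeros(16, dtype=np.uint8)  # the all-zero word is always valid
    print(syndrome(H, c).any())       # False -> parity check passes

Storing only the shift exponents rather than the full matrix is one reason the quasi-cyclic structure is hardware- and power-friendly.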
With the growth of technology, designs have become more complex and may contain bugs, which makes verification an indispensable part of product development. UVM describes a standard method for the verification of designs which is reusable and portable. This paper verifies the IIC bus protocol using the Universal Verification Methodology (UVM). The IIC controller is designed in Verilog using Vivado. It has an APB interface, and its functional and code coverage analysis is carried out in Mentor Graphics Questasim 10.4e. This work achieved 83.87% code coverage and 91.11% functional coverage.
This paper presents the development and implementation of wearable sensors based on the transient responses of textile chemical sensors for an odorant detection system serving as the wearable sensing layer of a humanoid robot. The textile chemical sensors consist of nine polymer/CNT nanocomposite gas sensors, which can be arranged into three different prototypes of the wearable humanoid robot: (i) human axillary odor monitoring, (ii) human foot odor tracking, and (iii) wearable personal gas-leakage detection. These prototypes can be integrated into high-performance wearable wellness platforms such as smart clothes, smart shoes, and a wearable pocket toxic-gas detector. The operating mode uses ZigBee wireless communication technology for data acquisition and monitoring. The wearable humanoid robot offers several platforms that can be applied to investigate the role of individual scent produced by different parts of the human body, such as axillary odor and foot odor, which have potential health implications from abnormal or offensive body odor. Moreover, the wearable personal safety and security component in the robot is also effective for detecting NH3 leakage in the environment. Preliminary results with nine textile chemical sensors for odor-biomarker and NH3 detection demonstrate the feasibility of using the wearable humanoid robot to distinguish the unpleasant odors released during physical activity. It also showed excellent performance in detecting a hazardous gas such as ammonia (NH3), with sensitivity as low as 5 ppm.
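The abstract does not specify how the transient responses are processed; a common approach for chemiresistive arrays, sketched below on synthetic data, is to extract normalized response features (delta R / R0) per sensor. Everything here (baseline length, signal shapes) is an illustrative assumption.

    import numpy as np

    def transient_features(r, baseline_len=20):
        """Features for one chemiresistive sensor: r is a resistance
        time series; the first `baseline_len` samples are the
        clean-air baseline R0."""
        r0 = r[:baseline_len].mean()
        dr = (r - r0) / r0                      # fractional change, dR/R0
        return dr.max(), np.gradient(dr).max()  # peak and fastest rise

    # Synthetic responses of a nine-sensor array (rows = sensors).
    rng = np.random.default_rng(0)
    t = np.arange(200)
    array = np.stack([
        100 * (1 + 0.05 * k * (1 - np.exp(-np.maximum(t - 20, 0) / 30)))
        + rng.normal(0, 0.05, t.size)
        for k in range(1, 10)])
    features = np.array([transient_features(row) for row in array])
    print(features.shape)  # (9, 2): one (peak, slope) pair per sensor

The resulting nine-sensor feature vectors are what a downstream classifier would use to distinguish odor classes.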
Processes to automate the selection of appropriate algorithms for various matrix computations are described. In particular, processes to check for, and certify, various matrix properties of black-box matrices are presented. These include sparsity patterns and structural properties that allow "superfast" algorithms to be used in place of black-box algorithms. Matrix properties that hold generically, and allow the use of matrix preconditioning to be reduced or eliminated, can also be checked for and certified, notably including in the small-field case, where this presently has the greatest impact on the efficiency of the computation.
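The abstract leaves the certification procedures abstract, but the flavor of black-box property checking can be illustrated with a classical randomized probe: certifying, with high probability, that a matrix accessible only through matrix-vector products is symmetric. This is a generic Freivalds-style sketch, not one of the paper's algorithms.

    import numpy as np

    def probably_symmetric(matvec, n, trials=20, rng=None):
        """Monte Carlo check that a black-box n x n matrix A
        (accessed only via x -> A @ x) satisfies A = A^T: for random
        x, y we must have x . (A y) == y . (A x). One failure
        certifies non-symmetry; each successful trial shrinks the
        probability of a missed asymmetry geometrically."""
        rng = rng or np.random.default_rng()
        for _ in range(trials):
            x = rng.integers(0, 2, n).astype(float)
            y = rng.integers(0, 2, n).astype(float)
            if not np.isclose(x @ matvec(y), y @ matvec(x)):
                return False
        return True

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 5.0],
                  [0.0, 5.0, 4.0]])
    print(probably_symmetric(lambda v: A @ v, 3))  # True
    A[0, 1] = 9.0                                  # break symmetry
    print(probably_symmetric(lambda v: A @ v, 3))  # almost surely False

Checks of this shape cost only a few black-box products, which is why knowing such properties in advance can justify switching to a faster specialized algorithm.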
Information, not just data, is key to today's global challenges. Solving these challenges requires not only advancing geospatial and big data analytics but also new analysis and decision-making environments that enable reliable decisions from trustable, understandable information, going beyond current approaches to machine learning and artificial intelligence. These environments are successful when they effectively couple human decision making with advanced, guided spatial analytics in human-computer collaborative discourse and decision making (HCCD). Our HCCD approach builds upon visual analytics, natural-scale templates, traceable information, human-guided analytics, and explainable and interactive machine learning, focusing on empowering the decision maker through interactive visual spatial analytic environments where non-digital human expertise and experience can be combined with state-of-the-art, transparent analytical techniques. When we combine this approach with real-world, application-driven research, not only does the pace of scientific innovation accelerate, but impactful change occurs. I'll describe how we have applied these techniques to challenges in sustainability, security, resiliency, public safety, and disaster management.
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Based on this, the most commonly reported metrics to evaluate systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures with little attention paid to their distribution. This is problematic, as systematic errors allow attackers to perpetually escape detection while random errors are less severe. Using 16 biometric datasets, we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean Equal Error Rate (EER) (through feature engineering or selection) can sometimes lead to lower security. Following this result, we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples of this are the selection of training data and the attacker model used to develop the negative class. 13 out of the 25 papers we analyzed either include impostor data in the negative class or randomly sample training data from the entire dataset, with a further 6 not giving any information on the methodology used. Using real-world data, we show that both of these decisions lead to significant underestimation of error rates, by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
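A minimal sketch of the proposed metric: the Gini coefficient computed over per-user error rates separates two systems that share the same mean error but have very different error distributions. The data below are synthetic.

    import numpy as np

    def gini(error_rates):
        """Gini coefficient of per-user error rates: ~0 means errors
        are spread evenly over users (random), values near 1 mean a
        few users absorb almost all errors (systematic)."""
        x = np.sort(np.asarray(error_rates, dtype=float))
        n = x.size
        if x.sum() == 0:
            return 0.0
        return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

    # Two systems with the same 5% mean false reject rate...
    even       = [0.05] * 10          # errors spread over all users
    systematic = [0.0] * 9 + [0.5]    # one user always rejected
    print(gini(even))        # 0.0 -> random errors
    print(gini(systematic))  # 0.9 -> systematic errors

Both systems would report an identical mean FRR, yet only the second leaves one user perpetually locked out (or, in the FAR case, one attacker perpetually undetected).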
The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals are closely examining the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts. While machine translation (MT) is the prevailing approach to processing non-English text, applying MT to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Network (GAN) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language and leverages the collective knowledge about cyber threats across languages to create a language-invariant representation without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber threats in French and Russian forums.
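The abstract does not detail the A-CLKT architecture. One standard way to combine recurrent features with an adversarial objective to obtain a language-invariant representation is a gradient reversal layer, sketched here in PyTorch; the dimensions, the two heads, and the training setup are illustrative assumptions, not the paper's design.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; negates (and scales)
        gradients on the backward pass, so the feature extractor
        learns to fool the language discriminator."""
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lamb * grad_out, None

    class AdversarialCLKT(nn.Module):
        def __init__(self, vocab=5000, emb=128, hidden=64, langs=3):
            super().__init__()
            self.embed = nn.Embedding(vocab, emb)
            self.lstm = nn.LSTM(emb, hidden, batch_first=True)
            self.threat_head = nn.Linear(hidden, 2)    # threat / benign
            self.lang_head = nn.Linear(hidden, langs)  # adversary

        def forward(self, tokens, lamb=1.0):
            _, (h, _) = self.lstm(self.embed(tokens))
            feat = h[-1]                    # shared representation
            return (self.threat_head(feat),
                    self.lang_head(GradReverse.apply(feat, lamb)))

    model = AdversarialCLKT()
    tokens = torch.randint(0, 5000, (8, 40))   # a batch of posts
    threat_logits, lang_logits = model(tokens)
    loss = (nn.functional.cross_entropy(threat_logits,
                                        torch.randint(0, 2, (8,)))
            + nn.functional.cross_entropy(lang_logits,
                                          torch.randint(0, 3, (8,))))
    loss.backward()  # reversal pushes `feat` toward language invariance

Minimizing the threat loss while the reversed gradients confuse the language discriminator is what, in this kind of setup, yields features that transfer across languages.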
As the connectivity within manufacturing processes increases in light of Industry 4.0, information security becomes a pressing issue for product suppliers, systems integrators, and asset owners. Reaching new heights in digitizing the manufacturing industry also provides more targets for cyber attacks; hence, cyber-physical production systems (CPPSs) must be adequately secured to prevent malicious acts. To achieve a sufficient level of security, proper defense mechanisms must be integrated early in the systems' lifecycle, not just retrofitted in the operation phase. Although standardization efforts exist with the objective of guiding involved stakeholders toward the establishment of a holistic industrial security concept (e.g., IEC 62443), a dedicated security development lifecycle for systems integrators is missing. This represents a major challenge for engineers who lack sufficient information security knowledge, as they may not be able to identify security-related activities that can be performed along the production systems engineering (PSE) process. In this paper, we propose a novel methodology named Security Development Lifecycle for Cyber-Physical Production Systems (SDL-CPPS) that aims to foster security by design for CPPSs, i.e., the engineering of smart production systems with security in mind. More specifically, we derive security-related activities based on (i) security standards and guidelines, and (ii) relevant literature, leading to a security-improved PSE process that can be implemented by systems integrators. Furthermore, this paper informs domain experts on how they can conduct these security-enhancing activities and provides pointers to relevant works that may fill the potential knowledge gap. Finally, we review the proposed approach by means of discussions in a workshop setting with technical managers of an Austrian-based systems integrator to identify barriers to adopting the SDL-CPPS.
Digital twins open up new possibilities in terms of monitoring, simulating, optimizing and predicting the state of cyber-physical systems (CPSs). Furthermore, we argue that a fully functional virtual replica of a CPS can also play an important role in securing the system. In this work, we present a framework that allows users to create and execute digital twins, closely matching their physical counterparts. We focus on a novel approach to automatically generate the virtual environment from specification, taking advantage of engineering data exchange formats. From a security perspective, an identical (in terms of the system's specification) simulated environment can be freely explored and tested by security professionals, without risking negative impacts on live systems. Going a step further, security modules on top of the framework support security analysts in monitoring the current state of CPSs. We demonstrate the viability of the framework in a proof of concept, including the automated generation of digital twins and the monitoring of security and safety rules.
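A rough sketch of the generate-from-specification idea: parse an engineering-data-style XML description into virtual device objects and evaluate a simple monitoring rule against their state. The XML schema, element names, and rule below are invented placeholders, not the framework's actual exchange format (e.g., AutomationML).

    import xml.etree.ElementTree as ET

    # Hypothetical engineering-data snippet (illustrative only).
    SPEC = """
    <Plant>
      <Device name="tank1" type="Tank" max_level="80"/>
      <Device name="pump1" type="Pump" max_rpm="3000"/>
    </Plant>
    """

    class TwinDevice:
        """Minimal virtual replica: the specified limits plus the
        live (observed or simulated) state of the device."""
        def __init__(self, name, dtype, limits):
            self.name, self.dtype, self.limits = name, dtype, limits
            self.state = {}

    def generate_twins(spec_xml):
        """Auto-generate twin objects from the exchanged spec."""
        twins = {}
        for dev in ET.fromstring(spec_xml).findall("Device"):
            attrs = dict(dev.attrib)
            name, dtype = attrs.pop("name"), attrs.pop("type")
            limits = {k: float(v) for k, v in attrs.items()}
            twins[name] = TwinDevice(name, dtype, limits)
        return twins

    def check_safety(twin):
        """Example rule: flag state variables exceeding their
        declared 'max_<name>' limit from the specification."""
        return [k for k, v in twin.state.items()
                if "max_" + k in twin.limits
                and v > twin.limits["max_" + k]]

    twins = generate_twins(SPEC)
    twins["tank1"].state["level"] = 95.0  # simulated fill level
    print(check_safety(twins["tank1"]))   # ['level'] -> rule violation

Because the twins are driven purely by the specification, rule checks of this kind can run against the simulated environment without touching the live system.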
Communicating vehicles will change road traffic as we know it. With current versions of European and US standards in mind, the authors discuss privacy and traffic surveillance issues in vehicular network technology and outline research directions that could address these issues.
Security is becoming a major concern in computing, and new techniques are evolving every day; one of these techniques is hash visualization. Hash visualization uses complex, randomly generated images for security, and these images can be used to hide data (watermarking). The proposed technique improves hash visualization by using genetic algorithms, a search-optimization technique based on the evolution of living creatures. The genetic algorithm used was far faster than previous traditional ones, and it improved hash visualization by evolving the tree used to generate the images, in order to obtain a better and larger tree that generates images with higher security. Security was assessed by calculating the fitness value for each chromosome based on a specifically designed algorithm.
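A minimal, mutation-only sketch of the idea: a genetic algorithm evolving expression trees that render images, with a placeholder variance-based fitness standing in for the paper's purpose-built security fitness, which the abstract does not disclose. All operators and parameters here are illustrative.

    import random

    OPS = {"add": lambda a, b: (a + b) % 256,
           "mul": lambda a, b: (a * b) % 256,
           "xor": lambda a, b: a ^ b}

    def random_tree(depth):
        """Random expression tree over pixel coordinates x, y."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", "y", random.randint(0, 255)])
        op = random.choice(list(OPS))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x, y):
        if tree == "x": return x
        if tree == "y": return y
        if isinstance(tree, int): return tree
        op, l, r = tree
        return OPS[op](evaluate(l, x, y), evaluate(r, x, y))

    def fitness(tree, size=16):
        """Placeholder: reward pixel variance in the rendered image."""
        px = [evaluate(tree, x, y) for y in range(size) for x in range(size)]
        mean = sum(px) / len(px)
        return sum((p - mean) ** 2 for p in px) / len(px)

    def mutate(tree, depth=3):
        if random.random() < 0.2:
            return random_tree(depth)       # replace a whole subtree
        if not isinstance(tree, tuple):
            return tree
        return (tree[0], mutate(tree[1]), mutate(tree[2]))

    pop = [random_tree(4) for _ in range(30)]
    for _ in range(20):                     # evolve toward higher fitness
        pop.sort(key=fitness, reverse=True)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
    print(round(max(fitness(t) for t in pop), 1))

In the paper's setting the fitness would instead score the security-relevant qualities of the generated image, and crossover between parent trees would typically accompany mutation.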
The Business Intelligence (BI) paradigm is challenged by emerging use cases such as news and social media analytics in which the source data are unstructured, the analysis metrics are unspecified, and the appropriate visual representations are unsupported by mainstream tools. This case study documents the work undertaken in Microsoft Research to enable these use cases in the Microsoft Power BI product. Our approach comprises: (a) back-end pipelines that use AI to infer navigable data structures from streams of unstructured text, media and metadata; and (b) front-end representations of these structures grounded in the Visual Analytics literature. Through our creation of multiple end-to-end data applications, we learned that representing the varying quality of inferred data structures was crucial for making the use and limitations of AI transparent to users. We conclude with reflections on BI in the age of AI, big data, and democratized access to data analytics.
Advancements in the semiconductor domain have enabled numerous applications of computer vision and deep learning in video surveillance; video surveillance in industrial automation, security, ADAS, live traffic analysis, etc. improves efficiency through image understanding. Image understanding requires input data with high precision, which depends on image resolution and camera placement. The data of interest can be thermal images or live feeds from various sensors. Composite video (CVBS) is a popular video interface capable of streaming up to HD (1920x1080) quality. Unlike high-speed serial interfaces such as HDMI and MIPI CSI, the analog composite video interface is a single-wire standard supporting longer distances. Image understanding requires edge detection and classification for further processing. The Sobel filter is one of the most widely used edge-detection filters and can be embedded into a live stream. This paper proposes a Zynq FPGA-based system design for video surveillance with Sobel edge detection, where the input composite video is decoded (analog CVBS input to YCbCr digital output), processed in hardware, and streamed to an HDMI display while simultaneously being stored in SD memory for later processing. The hardware design is scalable for resolutions from VGA to Full HD at 60 fps and to 4K at 24 fps. The system is built on the Xilinx ZC702 platform with a TVP5146 decoder to showcase the functional path.
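The edge-detection stage reduces to convolving each decoded luma (Y) frame with the two 3x3 Sobel kernels and combining the gradients, which is the per-pixel window arithmetic the FPGA pipeline performs. Below is a software reference on a synthetic frame; the hardware streaming details are not reproduced here.

    import numpy as np

    # Sobel kernels for horizontal and vertical gradients.
    KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
    KY = KX.T

    def sobel(luma):
        """Reference Sobel edge detector over the Y (luma) plane,
        using the common |G| ~ |Gx| + |Gy| approximation."""
        h, w = luma.shape
        out = np.zeros((h, w), dtype=np.uint8)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                win = luma[y - 1:y + 2, x - 1:x + 2].astype(np.int32)
                gx, gy = (win * KX).sum(), (win * KY).sum()
                out[y, x] = min(abs(gx) + abs(gy), 255)
        return out

    # Synthetic frame: a dark/bright split yields one strong edge.
    frame = np.zeros((8, 8), dtype=np.uint8)
    frame[:, 4:] = 200
    print(sobel(frame)[4])  # the edge column saturates around x = 3..4

A hardware version replaces the Python loops with a line-buffered 3x3 window over the pixel stream, which is what makes per-pixel processing feasible at 60 fps.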
The need for data exchange and storage is currently increasing, and with it the need for data exchange and storage devices and media. One of the most commonly used media for data exchange and storage is the USB flash drive. USB flash drives are widely used because they are easy to carry and have fairly large storage capacity. Unfortunately, this increased need is not matched by an increase in awareness of device security, both for USB flash drives and for the computers that use them as primary storage devices. This research shows the threats that can arise from the use of USB flash drives. The threat used in this research is a fork bomb implemented on an Arduino Pro Micro device that is converted into a USB flash drive. The purpose of the fork bomb is to degrade the memory performance of the affected device; as a result, process execution slows down. The use of a USB flash drive as an attack vector with the fork-bomb method prevents users from accessing the attacked operating system. The results obtained indicate that the USB flash drive can be used as a medium for a fork bomb attack on the Windows operating system.