Biblio

Filters: Keyword is Noise
Chiang, M., Lau, S..  2011.  Automatic multiple faces tracking and detection using improved edge detector algorithm. 2011 7th International Conference on Information Technology in Asia. :1–5.

Automatic face tracking and detection has been one of the fastest-developing areas owing to its wide range of applications, security and surveillance in particular. It remains a subject of great interest that has yet to be fully explored, owing to various distinctive factors: varying ethnic groups, sizes, orientations, poses, occlusions and lighting conditions. The focus of this paper is an improved algorithm that speeds up face tracking and detection by using a simple and efficient novel edge detector to reject non-face-like regions, thereby reducing the false detection rate of automatic face tracking and detection in still images containing multiple faces, for use in a facial expression system. The combination of Haar face detection and the proposed novel edge detector achieves a correct rate of 95.9%, which is 6.1% higher than the primitive integration of the Haar and Canny edge detectors.

Jiang, M., Lundgren, J., Pasha, S., Carratù, M., Liguori, C., Thungström, G..  2020.  Indoor Silent Object Localization using Ambient Acoustic Noise Fingerprinting. 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC). :1–6.

Indoor localization has been a popular research subject in recent years. Usually, object localization using sound involves devices on the objects acquiring data from stationary sound sources, or localizing the objects with external sensors when the objects themselves generate sounds. Indoor localization systems using microphones have traditionally relied on several microphones, which limits cost efficiency and the space required by the systems. In this paper, the goal is to investigate whether a stationary system can localize a silent object in a room with only one microphone and ambient noise as the information carrier. A subtraction method is combined with a fingerprint technique to define and distinguish the noise absorption characteristic of the silent object in the frequency domain for different object positions. The absorption characteristics at several positions of the object are taken as comparison references, serving as fingerprints of known positions. The experimental results verify the feasibility of this tentative idea: lateral localization of silent objects based on noise signals can be achieved.
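The fingerprint-matching step described above can be sketched as nearest-neighbor matching of averaged noise spectra. The frame length, FFT size, and Euclidean distance below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def spectral_fingerprint(noise_recording, n_fft=64):
    """Average magnitude spectrum of ambient-noise frames; stands in for
    the absorption fingerprint of one object position (simplified)."""
    x = np.asarray(noise_recording, dtype=float)
    x = x[: x.size // n_fft * n_fft].reshape(-1, n_fft)  # frame the recording
    return np.abs(np.fft.rfft(x, axis=1)).mean(axis=0)   # mean spectrum

def localize(query_fp, reference_fps):
    """Return the index of the known position whose fingerprint is
    closest (Euclidean) to the query fingerprint."""
    dists = [float(np.linalg.norm(query_fp - fp)) for fp in reference_fps]
    return int(np.argmin(dists))
```

In the paper's setting, the reference fingerprints would come from recordings made with the object at known positions, and a query fingerprint is assigned to the closest reference.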

Kim, Hyunki, Oh, Jinhyeok, Jang, Changuk, Yi, Okyeon, Han, Juhong, Wi, Hansaem, Park, Chanil.  2019.  Analysis of the Noise Source Entropy Used in OpenSSL’s Random Number Generation Mechanism. 2019 International Conference on Information and Communication Technology Convergence (ICTC). :59–62.
OpenSSL is an open-source library that implements the Secure Socket Layer (SSL), a security protocol used on top of the TCP/IP layers. All cryptographic systems require random number generation for many purposes, such as cryptographic key generation and protocol challenge/response; OpenSSL is no exception. OpenSSL can run on a variety of operating systems. In particular, when generating random numbers on Unix-like operating systems, it can use /dev/(u)random [6] as a seed to add randomness. In this paper, we analyze the process OpenSSL follows when random number generation is required. We also provide considerations for application developers and OpenSSL users who use /dev/urandom and the real-time clock (the nanoseconds field of the timespec structure) as seeds to generate cryptographic random numbers on the Unix family.
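A minimal sketch of the seeding idea the abstract describes: mix kernel entropy from /dev/urandom with the real-time clock's nanoseconds. Hashing the two inputs together is our own illustrative choice, and production code should rely on the OS CSPRNG directly rather than hand-rolled mixing:

```python
import hashlib
import os
import time

def gather_seed(n_bytes: int = 32) -> bytes:
    """Mix /dev/urandom output with the real-time clock's nanoseconds
    (illustrative only; not OpenSSL's actual mixing function)."""
    pool = hashlib.sha256()
    pool.update(os.urandom(n_bytes))                    # kernel entropy (/dev/urandom on Unix)
    pool.update(time.time_ns().to_bytes(8, "little"))   # nanosecond real-time clock
    return pool.digest()[:n_bytes]

seed = gather_seed()
```

Each call hashes fresh kernel entropy with the current clock reading, so two consecutive seeds differ with overwhelming probability.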
Han, Chihye, Yoon, Wonjun, Kwon, Gihyun, Kim, Daeshik, Nam, Seungkyu.  2019.  Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.

The recent success of brain-inspired deep neural networks (DNNs) in solving complex, high-level visual tasks has led to rising expectations for their potential to match the human visual system. However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing might be substantially different from human vision. One limitation of DNNs is that they are vulnerable to adversarial examples: input images to which subtle, carefully designed noise is added to fool a machine classifier. The robustness of the human visual system against adversarial examples is potentially of great importance, as it could uncover a key mechanistic feature that machine vision has yet to incorporate. In this study, we compare the visual representations of white- and black-box adversarial examples in DNNs and humans by leveraging functional magnetic resonance imaging (fMRI). We find a small but significant difference in representation patterns for the different (i.e., white- versus black-box) types of adversarial examples for both humans and DNNs. However, unlike in DNNs, human performance on categorical judgment is not degraded by the noise, regardless of its type. These results suggest that adversarial examples may be differentially represented in the human visual system, but are unable to affect the perceptual experience.
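White-box adversarial examples of the kind studied here are typically generated by stepping the input along the sign of the loss gradient (the fast gradient sign method). A toy sketch on a logistic classifier, standing in for the DNNs in the study:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style white-box perturbation on a logistic classifier:
    step the input up the gradient of the cross-entropy loss."""
    z = float(w @ x + b)
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid probability of class 1
    grad_x = (p - y) * w               # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)   # bounded step up the loss gradient

w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.1, 0.4])         # clean input, true label y = 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
score_clean = float(w @ x + b)         # positive: classified as class 1
score_adv = float(w @ x_adv + b)       # pushed toward the wrong class
```

The perturbation is small per pixel (at most eps) yet moves the classifier score toward the opposite decision, which is exactly the property that makes such noise nearly imperceptible to humans.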

Tian, Yun, Xu, Wenbo, Qin, Jing, Zhao, Xiaofan.  2018.  Compressive Detection of Random Signals from Sparsely Corrupted Measurements. 2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC). :389-393.

Compressed sensing (CS) integrates sampling and compression into a single step to reduce the amount of processed data. However, CS reconstruction generally suffers from high complexity. To address this problem, compressive signal processing (CSP) has recently been proposed to carry out some signal processing tasks directly in the compressive domain, without reconstruction. Among the various CSP techniques, compressive detection performs signal detection directly on the CS measurements. This paper investigates the compressive detection problem for random signals when the measurements are corrupted. Unlike current studies that consider only dense noise, our study considers both dense noise and sparse error. The theoretical performance is derived, and simulations are provided to verify the derived theoretical results.
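The corrupted-measurement model can be sketched as y = Φs + n + e, with dense noise n and a sparse error e. The simple energy detector below is an illustrative stand-in for the detectors analyzed in the paper, and the sparse-error subtraction is a hypothetical preprocessing step:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 64                                     # signal length, number of CS measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # random measurement matrix

s = rng.standard_normal(N) * 2.0                   # random signal to detect
n = rng.standard_normal(M) * 0.1                   # dense measurement noise
e = np.zeros(M)
e[rng.choice(M, 3, replace=False)] = 5.0           # sparse corruption on 3 measurements

def energy_statistic(y, e_hat=None):
    """Energy detector on CS measurements; optionally subtract an
    estimate of the sparse error first (hypothetical preprocessing)."""
    if e_hat is not None:
        y = y - e_hat
    return float(y @ y)

y_h1 = Phi @ s + n + e                             # hypothesis H1: signal present
y_h0 = n + e                                       # hypothesis H0: signal absent
```

With the signal present, the measurement energy is dominated by ||Φs||², so thresholding the statistic separates the two hypotheses; removing the sparse error first further sharpens the separation.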

Coustans, M., Terrier, C., Eberhardt, T., Salgado, S., Cherkaoui, A., Fesquet, L..  2017.  A subthreshold 30pJ/bit self-timed ring based true random number generator for internet of everything. 2017 IEEE SOI-3D-Subthreshold Microelectronics Technology Unified Conference (S3S). :1–3.

This paper presents a true random number generator that exploits the subthreshold properties of the jitter of events propagating in a self-timed ring and in an inverter-based ring oscillator. The design was implemented in a 180 nm CMOS flash process. The devices provide high-quality random bit sequences that pass the FIPS 140-2 and NIST SP 800-22 statistical tests, which guarantee uniform distribution and unpredictability thanks to the physics-based entropy source.
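Suites like NIST SP 800-22 assess such bit sequences statistically. Its first test, the frequency (monobit) test, reduces to a single complementary-error-function evaluation; a sequence fails when the p-value drops below the significance level (commonly 0.01):

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the
    hypothesis that ones and zeros are equally likely."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)          # +1 for a one, -1 for a zero
    return math.erfc(abs(s) / math.sqrt(2.0 * n))  # two-sided tail probability
```

A perfectly balanced sequence yields a p-value of 1.0, while a heavily biased one yields a p-value near 0 and is rejected.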

Bando, S., Nozawa, A., Matsuya, Y..  2015.  Multidimensional directed coherence analysis of keystroke dynamics and physiological responses. 2015 International Conference on Noise and Fluctuations (ICNF). :1–4.

Techno-stress has become a problem in recent years with the development of information technology. Various studies have reported a relationship between key typing and psychosomatic state. Keystroke dynamics are the dynamics of the key-typing motion. The objective of this paper is to clarify the mechanism linking keystroke dynamics and physiological responses. The inter-stroke time (IST), the interval between successive keystrokes, was measured as keystroke dynamics. The physiological responses were heart rate variability (HRV) and respiration (Resp). Multidimensional directed coherence was applied to the system consisting of IST, HRV, and Resp in order to reveal causal correlations. As a result, it was observed that the strength of entrainment of the physiological responses, which exhibit fluctuation, to IST differed depending on surrounding noise and cognitive load. Specifically, the entrainment became weaker as the cognitive resources devoted to IST relatively increased and the keystroke motion acquired a robust rhythm. Conversely, the entrainment became stronger as the cognitive resources devoted to IST relatively decreased, since resources were also devoted to the noise or the cognitive load.
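The IST measure is simply the difference of consecutive key-down timestamps. A sketch, with a coefficient-of-variation score added as one possible (hypothetical) proxy for rhythm robustness; the timestamps are invented for illustration:

```python
import numpy as np

def inter_stroke_times(key_down_ts):
    """Inter-stroke time (IST): intervals between consecutive key-down
    timestamps, in seconds."""
    return np.diff(np.asarray(key_down_ts, dtype=float))

ts = [0.00, 0.18, 0.31, 0.55, 0.70]        # hypothetical key-down times
ist = inter_stroke_times(ts)               # four intervals
rhythm_cv = float(ist.std() / ist.mean())  # lower value = more robust rhythm
```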

Konstantinou, C., Keliris, A., Maniatakos, M..  2015.  Privacy-preserving functional IP verification utilizing fully homomorphic encryption. 2015 Design, Automation Test in Europe Conference Exhibition (DATE). :333–338.

Intellectual Property (IP) verification is a crucial component of System-on-Chip (SoC) design in the modern IC design business model. Given a globalized supply chain and an increasing demand for IP reuse, IP theft has become a major concern for the IC industry. In this paper, we address the trust issues that arise between IP owners and IP users during the functional verification of an IP core. Our proposed scheme ensures the privacy of IP owners and users by a) generating a privacy-preserving version of the IP, which is functionally equivalent to the original design, and b) employing homomorphically encrypted input vectors. This allows the functional verification to be securely outsourced to a third party, or to be executed by either party, while revealing the least possible information regarding the test vectors and the IP core. Experiments on both combinational and sequential benchmark circuits demonstrate up to three orders of magnitude of IP verification slowdown, due to the computationally intensive fully homomorphic operations, for different security parameter sizes.

Saurabh, A., Kumar, A., Anitha, U..  2015.  Performance analysis of various wavelet thresholding techniques for despeckling of sonar images. 2015 3rd International Conference on Signal Processing, Communication and Networking (ICSCN). :1–7.

Image denoising is nowadays a great challenge in the field of image processing, and the discrete wavelet transform (DWT) is one of the most powerful and promising approaches to it. However, fixing an optimal threshold is the key factor determining the performance of a DWT-based denoising algorithm. The optimal threshold can be estimated from the image statistics to obtain better denoising performance in terms of image clarity or quality. In this paper we experimentally analyze various methods of denoising sonar images using several thresholding methods (VisuShrink, BayesShrink and NeighShrink) and compare the results in terms of various image quality parameters (PSNR, MSE, SSIM and entropy). The results show an improvement in the visual quality of the sonar images, as the speckle noise is suppressed while edge details are retained.
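Wavelet-thresholding denoising of this kind can be sketched with a one-level Haar DWT and the VisuShrink universal threshold σ√(2 ln n). This is a 1-D illustration only; real despeckling would use a 2-D multi-level transform and a log transform for the multiplicative speckle model:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT of an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
    return a, d

def haar_idwt(a, d):
    """Inverse of the one-level Haar DWT."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, sigma):
    """VisuShrink-style denoising: soft-threshold the detail band with
    the universal threshold sigma * sqrt(2 ln n)."""
    a, d = haar_dwt(x)
    t = sigma * np.sqrt(2.0 * np.log(x.size))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft thresholding
    return haar_idwt(a, d)
```

On a piecewise-constant signal plus Gaussian noise, thresholding the detail band removes most of the noise energy while the approximation band preserves the structure.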

Windisch, G., Kozlovszky, M..  2015.  Image sharpness metrics for digital microscopy. 2015 IEEE 13th International Symposium on Applied Machine Intelligence and Informatics (SAMI). :273–276.

Image sharpness measurement is an important part of many image processing applications. Multiple algorithms have been proposed and evaluated for measuring image sharpness, but they were developed with out-of-focus photographs in mind and do not work as well on images taken with a digital microscope. In this article we show the differences between images taken with digital cameras, images taken with a digital microscope, and artificially blurred images. The conventional sharpness measures are executed on all of these categories to measure the difference, and a standard image set taken with a digital microscope is proposed and described to serve as a common baseline for further sharpness measures in the field.
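A classical sharpness measure of the kind compared in such studies is the variance of the Laplacian response, which drops sharply under defocus blur. A minimal sketch, with a 3x3 box blur standing in for defocus:

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of the 4-neighbour Laplacian response
    over the image interior. Higher means sharper."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def box_blur(img):
    """3x3 box blur (valid region) used here to mimic defocus."""
    out = np.zeros_like(img[1:-1, 1:-1])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
    return out / 9.0
```

On a high-contrast checkerboard the score collapses after blurring, which is the behaviour such metrics rely on; microscope images violate the assumptions behind this (as the article shows), which motivates the proposed benchmark set.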

Chessa, M., Grossklags, J., Loiseau, P..  2015.  A Game-Theoretic Study on Non-monetary Incentives in Data Analytics Projects with Privacy Implications. 2015 IEEE 28th Computer Security Foundations Symposium. :90–104.

The amount of personal information contributed by individuals to digital repositories such as social network sites has grown substantially. The existence of this data offers unprecedented opportunities for data analytics research in various domains of societal importance, including medicine and public policy. The results of these analyses can be considered a public good which benefits data contributors as well as individuals who are not making their data available. At the same time, the release of personal information carries perceived and actual privacy risks to the contributors. Our research addresses this problem area. In our work, we study a game-theoretic model in which individuals take control over participation in data analytics projects in two ways: 1) individuals can contribute data at a self-chosen level of precision, and 2) individuals can decide whether they want to contribute at all. From the analyst's perspective, we investigate to what degree the research analyst has flexibility to set requirements for data precision so that individuals are still willing to contribute to the project and the quality of the estimation improves. We study this tradeoff scenario for populations of homogeneous and heterogeneous individuals, and determine Nash equilibria that reflect the optimal level of participation and precision of contributions. We further prove that the analyst can substantially increase the accuracy of the analysis by imposing a lower bound on the precision of the data that users can reveal.

A. Roy, S. P. Maity.  2015.  "On segmentation of CS reconstructed MR images". 2015 Eighth International Conference on Advances in Pattern Recognition (ICAPR). :1-6.

This paper addresses magnetic resonance (MR) image reconstruction in the compressive sampling (compressed sensing) paradigm, followed by segmentation. To improve image reconstruction from a low-dimensional measurement space, weighted linear prediction and random noise injection at the unobserved space are performed first, followed by spatial-domain denoising through adaptive recursive filtering. The reconstructed image, however, suffers from imprecise and/or missing edges, boundaries, lines and curvatures, as well as residual noise. The curvelet transform is purposely used for noise removal and edge enhancement, through hard thresholding and suppression of the approximate sub-bands, respectively. Finally, genetic algorithm (GA) based clustering is performed to segment the sharpened MR image using the weighted contribution of variance and entropy values. Extensive simulation results highlight the performance improvement in both the image reconstruction and segmentation problems.

S. Chen, F. Xi, Z. Liu, B. Bao.  2015.  "Quadrature compressive sampling of multiband radar signals at sub-Landau rate". 2015 IEEE International Conference on Digital Signal Processing (DSP). :234-238.

Sampling multiband radar signals is an essential issue for multiband/multifunction radar. This paper proposes a multiband quadrature compressive sampling (MQCS) system to perform the sampling at a sub-Landau rate. The MQCS system randomly projects the multiband signal into a compressive multiband one by modulating each subband signal with a low-pass signal, and then samples the compressive multiband signal at the Landau rate, producing compressive measurements as output. The compressive in-phase and quadrature (I/Q) components of each subband are extracted from the compressive measurements and exploited to recover the baseband I/Q components. As the effective bandwidth of the compressive multiband signal is much less than that of the received multiband signal, the sampling rate is much less than the Landau rate of the received signal. Simulation results validate that the proposed MQCS system can effectively acquire and reconstruct the baseband I/Q components of the multiband signals.

S. R. Islam, S. P. Maity, A. K. Ray.  2015.  "On compressed sensing image reconstruction using linear prediction in adaptive filtering". 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :2317-2323.

Compressed sensing (CS), or compressive sampling, deals with the reconstruction of signals from limited observations/measurements, far below the Nyquist-rate requirement. This is essential in many practical imaging systems, since sampling at the Nyquist rate may not always be possible due to limited storage, slow sampling rates, or extremely expensive measurements, e.g., in magnetic resonance imaging (MRI). Mathematically, CS addresses the problem of finding the root of an unknown distribution comprising unknown as well as known observations. Robbins-Monro (RM) stochastic approximation, a non-parametric approach, is explored here as a solution to the CS reconstruction problem. Distance-based linear prediction from the observed measurements is performed to obtain the unobserved samples, followed by the addition of random noise acting as the residual (prediction error). A spatial-domain adaptive Wiener filter is then used to diminish the noise and reveal new features from the degraded observations. Extensive simulation results highlight the relative performance gain over existing work.
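The final spatial-domain filtering step can be illustrated with a generic adaptive (local) Wiener filter, which shrinks each pixel toward its local mean according to the locally estimated signal variance. This is a textbook sketch under stated assumptions, not the paper's exact filter; `local_stats` and `wiener_denoise` are our own names:

```python
import numpy as np

def local_stats(img, k=3):
    """Local mean and variance over a k x k window (valid region)."""
    H, W = img.shape
    mean = np.zeros((H - k + 1, W - k + 1))
    sq = np.zeros_like(mean)
    for dy in range(k):
        for dx in range(k):
            patch = img[dy:H - k + 1 + dy, dx:W - k + 1 + dx]
            mean += patch
            sq += patch ** 2
    mean /= k * k
    return mean, sq / (k * k) - mean ** 2

def wiener_denoise(img, noise_var, k=3):
    """Adaptive Wiener filter: output = local mean + gain * (pixel - mean),
    where gain = max(local_var - noise_var, 0) / local_var."""
    m, v = local_stats(img, k)
    center = img[k // 2: img.shape[0] - k // 2, k // 2: img.shape[1] - k // 2]
    gain = np.maximum(v - noise_var, 0.0) / np.maximum(v, 1e-12)
    return m + gain * (center - m)
```

In flat regions the local variance approaches the noise variance, so the gain goes to zero and the filter averages aggressively; near strong features the gain approaches one and the filter leaves the pixel nearly untouched.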

Pajic, M., Weimer, J., Bezzo, N., Tabuada, P., Sokolsky, O., Insup Lee, Pappas, G.J..  2014.  Robustness of attack-resilient state estimators. Cyber-Physical Systems (ICCPS), 2014 ACM/IEEE International Conference on. :163-174.

The interaction between information technology and the physical world makes Cyber-Physical Systems (CPS) vulnerable to malicious attacks beyond standard cyber attacks. This has motivated the need for attack-resilient state estimation. Yet, existing state estimators are based on the unrealistic assumption that the exact system model is known. Consequently, in this work we present a method for state estimation in the presence of attacks for systems with noise and modeling errors. When the estimated states are used by a state-based feedback controller, we show that the attacker cannot destabilize the system by exploiting the difference between the model used for state estimation and the real physical dynamics of the system. Furthermore, we describe how implementation issues such as jitter, latency and synchronization errors can be mapped into parameters of the state estimation procedure that describe modeling errors, and provide a bound on the state-estimation error caused by modeling errors. This enables mapping control performance requirements into real-time (i.e., timing-related) specifications imposed on the underlying platform. Finally, we illustrate and experimentally evaluate this approach on an unmanned ground vehicle case study.

Tong Liu, Qian Xu, Yuejun Li.  2014.  Adaptive filtering design for in-motion alignment of INS. Control and Decision Conference (2014 CCDC), The 26th Chinese. :2669-2674.

Estimation of the misalignment angles of a strapdown inertial navigation system (INS) using global positioning system (GPS) data is strongly affected by measurement noises, especially noises with time-varying statistical properties. Hence, an adaptive filtering approach is recommended to improve the accuracy of in-motion alignment. In this paper, a simplified form of Celso's adaptive stochastic filtering is derived and applied to estimate both the INS error states and the measurement noise statistics. To detect and bound the influence of outliers in the INS/GPS integration, outlier detection based on a jerk tracking model is also proposed. The accuracy and validity of the proposed algorithm are tested through ground-based navigation experiments.

Thu Trang Le, Atto, A.M., Trouvé, E., Nicolas, J.-M..  2014.  Adaptive Multitemporal SAR Image Filtering Based on the Change Detection Matrix. Geoscience and Remote Sensing Letters, IEEE. 11:1826-1830.

This letter presents an adaptive filtering approach for synthetic aperture radar (SAR) image time series based on an analysis of the temporal evolution. First, change detection matrices (CDMs) containing information on changed and unchanged pixels are constructed for each spatial position over the time series by implementing coefficient-of-variation (CV) cross tests. Afterward, the CDM provides, for each pixel in each image, an adaptive spatiotemporal neighborhood, which is used to derive the filtered value. The proposed approach is illustrated on a time series of 25 ascending TerraSAR-X images acquired from November 6, 2009 to September 25, 2011 over the Chamonix-Mont-Blanc test site, which includes different kinds of change, such as parking occupation and glacier surface evolution.
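A much-simplified version of a CV cross test between acquisition dates: pool the local samples from two dates and flag the pair as changed when the pooled coefficient of variation exceeds a threshold. The threshold and sampling below are illustrative only, and SAR intensities are assumed positive:

```python
import numpy as np

def change_detection_matrix(stack, threshold=0.3):
    """For one pixel position, build a T x T boolean matrix marking a
    pair of dates as changed (True) when the coefficient of variation
    of their pooled local samples exceeds the threshold."""
    T = len(stack)
    cdm = np.zeros((T, T), dtype=bool)
    for i in range(T):
        for j in range(T):
            pooled = np.concatenate([stack[i], stack[j]])
            cv = pooled.std() / pooled.mean()   # coefficient of variation
            cdm[i, j] = cv > threshold
    return cdm
```

The row of the CDM for a given date then picks out the unchanged dates, whose samples form the adaptive temporal neighborhood used to compute the filtered value (e.g., their mean).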

Bin Sun, Shutao Li, Jun Sun.  2014.  Scanned Image Descreening With Image Redundancy and Adaptive Filtering. Image Processing, IEEE Transactions on. 23:3698-3710.

Currently, most electrophotographic printers use the halftoning technique to print continuous-tone images, so scanned images obtained from such hard copies are usually corrupted by screen-like artifacts. In this paper, a new model of scanned halftone images is proposed to consider both printing distortions and halftone patterns. Based on this model, an adaptive-filtering-based descreening method is proposed to recover high-quality contone images from the scanned images. An image-redundancy-based denoising algorithm is first adopted to reduce printing noise and attenuate distortions. Then, the screen frequency of the scanned image and local gradient features are used for adaptive filtering. A basic contone estimate is obtained by filtering the denoised scanned image with an anisotropic Gaussian kernel, whose parameters are automatically adjusted using the screen frequency and local gradient information. Finally, an edge-preserving filter is used to further enhance the sharpness of edges to recover a high-quality contone image. Experiments on real scanned images demonstrate that the proposed method can recover high-quality contone images from the scanned images. Compared with the state-of-the-art methods, the proposed method produces very sharp edges and much cleaner smooth regions.

Jian Wang, Lin Mei, Yi Li, Jian-Ye Li, Kun Zhao, Yuan Yao.  2014.  Variable Window for Outlier Detection and Impulsive Noise Recognition in Range Images. Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on. :857-864.

To improve the overall performance of denoising range images, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Based on several discriminant criteria, the principles of dropout-IN detection and outlier-IN detection are provided. Subsequently, a nearest non-IN neighbor search and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed method, the sizes of the two windows for outlier-IN detection and IN denoising are investigated. Starting from a theoretical model of invader occlusion, a variable window is presented to adapt the window size to the dynamic environment of each point, along with practical criteria for adaptive determination of the variable window size. Experiments on real range images of multi-line surfaces are conducted, with evaluations in terms of computational complexity and quality assessment, including comparative analysis against several other popular methods. The results indicate that the proposed method can detect impulsive noise with high accuracy and denoise it with strong adaptability thanks to the variable window.

Dirik, A.E., Sencar, H.T., Memon, N..  2014.  Analysis of Seam-Carving-Based Anonymization of Images Against PRNU Noise Pattern-Based Source Attribution. Information Forensics and Security, IEEE Transactions on. 9:2277-2290.

The availability of sophisticated source attribution techniques raises new concerns about the privacy and anonymity of photographers, activists, and human rights defenders who need to stay anonymous while spreading their images and videos. Recently, the use of seam carving, a content-aware resizing method, has been proposed to anonymize the source camera of images against the well-known photoresponse nonuniformity (PRNU) based source attribution technique. In this paper, we analyze the seam-carving-based source camera anonymization method by determining the limits of its performance under two adversarial models. Our analysis shows that the effectiveness of the deanonymization attacks depends on various factors, including the parameters of the seam-carving method, the strength of the camera's PRNU noise pattern, and the adversary's ability to identify uncarved image blocks in a seam-carved image. Our results show that, in the general case, there should not be many uncarved blocks larger than 50×50 pixels for successful anonymization of the source camera.
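PRNU-based source attribution boils down to correlating an image's noise residual against a camera's fingerprint estimate; seam carving desynchronizes the residual and lowers this correlation. A toy sketch with synthetic data (the fingerprint and residuals here are simulated, not extracted from real images):

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation between a noise residual and a camera's
    PRNU fingerprint estimate."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
fingerprint = rng.standard_normal((64, 64)) * 0.02                  # simulated camera PRNU
residual_same = fingerprint + 0.05 * rng.standard_normal((64, 64))  # residual, same camera
residual_other = 0.05 * rng.standard_normal((64, 64))               # residual, other camera
```

Attribution declares a match when the correlation exceeds a decision threshold; anonymization succeeds when carving pushes the same-camera correlation below it, which is why large uncarved blocks (still synchronized with the fingerprint) defeat it.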

Banerjee, D., Bo Dong, Biswas, S., Taghizadeh, M..  2014.  Privacy-preserving channel access using blindfolded packet transmissions. Communication Systems and Networks (COMSNETS), 2014 Sixth International Conference on. :1-8.

This paper proposes a novel wireless MAC-layer approach towards achieving channel access anonymity. Nodes autonomously select periodic TDMA-like time-slots for channel access by employing a novel channel sensing strategy, and they do so without explicitly sharing any identity information with other nodes in the network. An add-on hardware module for the proposed channel sensing has been developed and the proposed protocol has been implemented in Tinyos-2.x. Extensive evaluation has been done on a test-bed consisting of Mica2 hardware, where we have studied the protocol's functionality and convergence characteristics. The functionality results collected at a sniffer node using RSSI traces validate the syntax and semantics of the protocol. Experimentally evaluated convergence characteristics from the Tinyos test-bed were also found to be satisfactory.

Jun-Yong Lee, Hyoung-Gook Kim.  2014.  Audio fingerprinting to identify TV commercial advertisement in real-noisy environment. Communications and Information Technologies (ISCIT), 2014 14th International Symposium on. :527-530.

This paper proposes a high-performance audio fingerprint extraction method for identifying TV commercial advertisements. In the proposed method, salient audio peak-pair fingerprints based on the constant Q transform (CQT) are hashed and stored so that they can be efficiently compared to one another. Experimental results confirm that the proposed method is quite robust under different noise conditions and improves the accuracy of the audio fingerprinting system in real noisy environments.
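Peak-pair fingerprinting in the style of such systems hashes an anchor peak together with a few nearby target peaks. The sketch below works on precomputed (time-frame, frequency-bin) peaks and leaves out the CQT front end the paper uses; the fan-out parameter is an illustrative assumption:

```python
def peak_pair_hashes(peaks, fan_out=3):
    """Build peak-pair hashes from salient spectral peaks: each hash
    combines the anchor peak's frequency bin, a target peak's bin,
    and their time offset, stored with the anchor time."""
    peaks = sorted(peaks)                       # (time_frame, freq_bin) pairs
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1: i + 1 + fan_out]:
            hashes.append(((f1, f2, t2 - t1), t1))
    return hashes
```

Because each hash depends only on the relative geometry of two peaks, matching a noisy query against a stored database reduces to exact hash lookups followed by checking that the anchor-time offsets are consistent.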

Hui Zeng, Tengfei Qin, Xiangui Kang, Li Liu.  2014.  Countering anti-forensics of median filtering. Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. :2704-2708.

The statistical fingerprints left by median filtering can be a valuable clue for image forensics. However, these fingerprints may be maliciously erased by a forger. Recently, a tricky anti-forensic method was proposed to remove median filtering traces by restoring images' pixel-difference distributions. In this paper, we analyze the traces left by this anti-forensic technique and propose a novel countermeasure. The experimental results show that our method can reveal this anti-forensic processing effectively at low computational load. To the best of our knowledge, this is the first work on countering anti-forensics of median filtering.
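One classical median-filtering fingerprint is an inflated fraction of zero first-order pixel differences (streaking), which is exactly the statistic the anti-forensic method tries to restore. A minimal detector sketch:

```python
import numpy as np

def median3(img):
    """3x3 median filter over the valid region of a 2-D image."""
    H, W = img.shape
    shifts = [img[dy:H - 2 + dy, dx:W - 2 + dx]
              for dy in range(3) for dx in range(3)]
    return np.median(np.stack(shifts), axis=0)

def zero_diff_ratio(img):
    """Fraction of zero horizontal first-order pixel differences;
    median filtering inflates this statistic."""
    d = np.diff(img, axis=1)
    return float((d == 0).mean())
```

On noisy content the unfiltered ratio stays near the chance level, while median filtering produces runs of identical values and drives it up, so thresholding the ratio yields a simple median-filtering detector.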

Shuai Yi, Xiaogang Wang.  2014.  Profiling stationary crowd groups. Multimedia and Expo (ICME), 2014 IEEE International Conference on. :1-6.

Detecting stationary crowd groups and analyzing their behaviors have important applications in crowd video surveillance, but have rarely been studied. The contributions of this paper are twofold. First, a stationary crowd detection algorithm is proposed to estimate the stationary time of foreground pixels. It employs spatial-temporal filtering and motion filtering in order to be robust to noise caused by occlusions and crowd clutters. Second, in order to characterize the emergence and dispersal processes of stationary crowds and their behaviors during the stationary periods, three attributes are proposed for quantitative analysis. These attributes are recognized with a set of proposed crowd descriptors which extract visual features from the results of stationary crowd detection. The effectiveness of the proposed algorithms is shown through experiments on a benchmark dataset.