Biblio

Filters: Keyword is compressive sensing
2021-04-27
Chen, Q., Chen, D., Gong, J..  2020.  Weighted Predictive Coding Methods for Block-Based Compressive Sensing of Images. 2020 3rd International Conference on Unmanned Systems (ICUS). :587–591.
Compressive sensing (CS) is beneficial for unmanned reconnaissance systems that must obtain high-quality images with limited resources. The existing prediction methods for block-based compressive sensing (BCS) of images can be regarded as weighted predictive coding with particular fixed coefficients. To find better prediction coefficients for BCS, this paper proposes two weighted prediction methods. The first method converts the prediction model of measurements into a prediction model of image blocks. The prediction weights are obtained by training the prediction model of image blocks offline, which avoids the influence of the sampling rate on the prediction model of measurements. The second method calculates the prediction coefficients adaptively based on the average energy of the measurements, so the weights can be adjusted according to the measurements themselves. Compared with existing methods, the proposed prediction methods for BCS of images further improve the quality of the reconstructed image.
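As a rough illustration of the measurement-domain prediction idea behind such weighted predictive coding, the following Python sketch senses neighbouring image blocks with a shared Gaussian matrix, predicts the current block's measurement vector as a weighted combination of its neighbours' measurements, and codes only the residual. The fixed weights and block sizes are illustrative assumptions, not the trained or adaptive coefficients from the paper.
```python
import numpy as np

rng = np.random.default_rng(0)

B, M = 16, 64                                        # block size and measurements per block
Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)   # shared Gaussian sensing matrix

def sense(block):
    """Block-based CS: y = Phi @ vec(block)."""
    return Phi @ block.reshape(-1)

# Toy image made of three horizontally adjacent blocks with similar content.
base = rng.random((B, B))
blocks = [base + 0.01 * rng.standard_normal((B, B)) for _ in range(3)]
y = [sense(b) for b in blocks]

# Weighted prediction of the current block's measurements from its two left
# neighbours (illustrative fixed weights; the paper learns them offline or
# adapts them to the measurement energy).
w = np.array([0.7, 0.3])
y_pred = w[0] * y[1] + w[1] * y[0]
residual = y[2] - y_pred          # only this residual would be coded/transmitted

print("measurement energy :", np.sum(y[2] ** 2))
print("residual energy    :", np.sum(residual ** 2))
```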
Stanković, I., Brajović, M., Daković, M., Stanković, L., Ioana, C..  2020.  Quantization Effect in Nonuniform Nonsparse Signal Reconstruction. 2020 9th Mediterranean Conference on Embedded Computing (MECO). :1–4.
This paper examines the influence of quantization on compressive sensing applied to nonuniformly sampled nonsparse signals with a reduced set of randomly positioned measurements. The reconstruction error is generalized to an exact expected squared error expression. The aim is to connect the generalized random sampling strategy with the quantization effect and to find the resulting reconstruction error. Small sampling deviations correspond to imprecisions of the sampling strategy, while completely random sampling schemes cause large sampling deviations. Numerical examples show agreement between the statistical results and the theoretical values.
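The following numerical sketch (not the paper's derivation) sets up the basic ingredients: a nonsparse signal observed at a reduced set of randomly positioned samples, uniform quantization of those measurements, and a simple minimum-norm DFT-domain reconstruction, so the combined effect of missing samples and the Delta^2/12 quantization noise can be observed empirically. All parameter values are illustrative.
```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 128, 96                       # signal length, number of kept measurements
t = np.arange(N)
# Non-integer frequencies make the signal nonsparse in the DFT basis.
x = np.cos(2 * np.pi * 5.3 * t / N) + 0.5 * np.sin(2 * np.pi * 13.7 * t / N)

# Randomly positioned (reduced) samples, as in a random sampling strategy.
pos = np.sort(rng.choice(N, M, replace=False))
y = x[pos]

# Uniform quantization of the measurements with step Delta.
Delta = 0.05
yq = Delta * np.round(y / Delta)

# Minimum-norm reconstruction in the DFT basis from the quantized samples
# (a simple stand-in for the paper's reconstruction; no sparsity assumed).
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
A = F[pos, :]                        # partial DFT acting on the kept positions
coef, *_ = np.linalg.lstsq(A, yq, rcond=None)
x_hat = np.real(F @ coef)

# Total error combines the missing-sample error and the quantization noise.
print(f"empirical MSE          : {np.mean((x - x_hat) ** 2):.2e}")
print(f"quantization noise var : {Delta**2 / 12:.2e}   (Delta^2/12 per measurement)")
```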
Manchanda, R., Sharma, K..  2020.  A Review of Reconstruction Algorithms in Compressive Sensing. 2020 International Conference on Advances in Computing, Communication & Materials (ICACCM). :322–325.
Compressive Sensing (CS) is a promising technology for signal acquisition. CS reduces the number of measurements needed to obtain signals that are compressible or sparse in some basis. The compressible or sparse nature of a signal can be obtained by transforming it into a suitable domain. Depending on the signal's sparsity, signals are sampled below the Nyquist rate when CS is used. An optimization problem then needs to be solved to recover the original signal. Relatively few studies have surveyed the reconstruction of the signals. Therefore, in this paper, reconstruction algorithms for sparse signal recovery in CS are elaborated systematically. The discussion of the various reconstruction algorithms in this paper will help readers understand these algorithms efficiently.
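Orthogonal Matching Pursuit (OMP) is one of the greedy reconstruction algorithms such surveys typically cover; a minimal, self-contained sketch of it is given below, assuming a random Gaussian measurement matrix and an exactly k-sparse signal.
```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
N, M, k = 256, 80, 8
A = rng.standard_normal((M, N)) / np.sqrt(M)     # random Gaussian measurement matrix
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x

x_rec = omp(A, y, k)
print("support recovered:", set(np.flatnonzero(x_rec)) == set(np.flatnonzero(x)))
print("relative error   :", np.linalg.norm(x_rec - x) / np.linalg.norm(x))
```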
K, S., Devi, K. Suganya, Srinivasan, P., Dheepa, T., Arpita, B., Singh, L. Dolendro.  2020.  Joint Correlated Compressive Sensing based on Predictive Data Recovery in WSNs. 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). :1–5.
Data sampling is a critical process for energy-constrained Wireless Sensor Networks. In this article, we propose a Predictive Data Recovery Compressive Sensing (PDR-CS) procedure for data sampling. PDR-CS samples data measurements from the monitoring field on the basis of spatial and temporal correlation, and the sparse measurements are recovered at the sink. Our proposed algorithm, PDR-CS, extends iterative re-weighted ℓ1 (IRW-ℓ1) minimization and regularization on top of spatio-temporal compressibility to enhance the accuracy of signal recovery and reduce energy consumption. The simulation study shows that a smaller number of samples is enough to recover the signal, and that, compared with other compressive sensing procedures, PDR-CS requires less time.
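A minimal sketch of the iterative re-weighted ℓ1 idea underlying IRW-ℓ1 is shown below. It uses weighted ISTA (proximal-gradient) steps as the inner solver and re-weights coefficients by 1/(|x| + eps) between passes; this is a generic stand-in under stated assumptions, not the PDR-CS procedure or its spatio-temporal correlation model.
```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (proximal operator of the weighted l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irw_l1(A, y, lam=0.05, outer=4, inner=200, eps=1e-3):
    """Iterative re-weighted l1 recovery sketch: each outer pass runs weighted
    ISTA steps, then re-weights so small coefficients are penalized more."""
    M, N = A.shape
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.ones(N)
    x = np.zeros(N)
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - y)
            x = soft(x - grad / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)        # re-weighting step
    return x

rng = np.random.default_rng(3)
N, M, k = 200, 60, 6
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                              # sparse readings gathered at the sink

x_hat = irw_l1(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```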
2020-10-26
Uyan, O. Gokhan, Gungor, V. Cagri.  2019.  Lifetime Analysis of Underwater Wireless Networks Concerning Privacy with Energy Harvesting and Compressive Sensing. 2019 27th Signal Processing and Communications Applications Conference (SIU). :1–4.
Underwater sensor networks (UWSN) are a division of classical wireless sensor networks (WSN) designed to accomplish both military and civil operations, such as invasion detection and underwater life monitoring. Underwater sensor nodes operate on the energy provided by integrated, limited batteries, and it is a serious challenge to replace a battery under water, especially in harsh conditions and with a high number of sensor nodes. Energy efficiency therefore emerges as a very important issue. Besides energy efficiency, data privacy is another essential topic, since UWSN typically generate sensitive sensing data. UWSN can be vulnerable to silent positioning and listening, in which similar adversary nodes are injected at locations close to the network to sniff transmitted data. In this paper, we discuss the use of compressive sensing (CS) and energy harvesting (EH) to improve the lifetime of the network, and we suggest a novel encryption decision method to maintain the privacy of the UWSN. We also deploy a Mixed Integer Programming (MIP) model to optimize the encryption decision cases, which leads to an improved network lifetime.
2020-09-14
Anselmi, Nicola, Poli, Lorenzo, Oliveri, Giacomo, Rocca, Paolo, Massa, Andrea.  2019.  Dealing with Correlation and Sparsity for an Effective Exploitation of the Compressive Processing in Electromagnetic Inverse Problems. 2019 13th European Conference on Antennas and Propagation (EuCAP). :1–4.
In this paper, a novel method for tomographic microwave imaging based on the Compressive Processing (CP) paradigm is proposed. The retrieval of the dielectric profiles of the scatterers is carried out by efficiently solving both the sampling and the sensing problems suitably formulated under the first order Born approximation. Selected numerical results are presented in order to show the improvements provided by the CP with respect to conventional compressive sensing (CSE) approaches.
2019-12-10
Ponuma, R, Amutha, R, Haritha, B.  2018.  Compressive Sensing and Hyper-Chaos Based Image Compression-Encryption. 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). :1-5.

A 2D-Compressive Sensing and hyper-chaos based image compression-encryption algorithm is proposed. The 2D image is compressively sampled and encrypted using two measurement matrices. A chaos based measurement matrix construction is employed. The construction of the measurement matrix is controlled by the initial and control parameters of the chaotic system, which are used as the secret key for encryption. The linear measurements of the sparse coefficients of the image are then subjected to a hyper-chaos based diffusion which results in the cipher image. Numerical simulation and security analysis are performed to verify the validity and reliability of the proposed algorithm.
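A hedged sketch of the key-controlled measurement-matrix idea: a logistic-map sequence seeded by secret parameters (x0, r) fills the matrix entries, and the matrix is then used to compressively sample a sparse vector. The specific map, parameter values, and normalization below are illustrative; the paper's exact construction and the subsequent hyper-chaotic diffusion are not reproduced.
```python
import numpy as np

def logistic_sequence(x0, r, n, discard=1000):
    """Generate a logistic-map chaotic sequence; (x0, r) act as the secret key."""
    x = x0
    for _ in range(discard):          # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def chaotic_measurement_matrix(m, n, x0=0.345, r=3.99):
    """Key-controlled measurement matrix built from the chaotic sequence."""
    seq = logistic_sequence(x0, r, m * n)
    Phi = (2.0 * seq - 1.0).reshape(m, n)     # map (0,1) -> (-1,1)
    return Phi / np.sqrt(m)

# Compressively sample a sparse vector (e.g., one image column's sparse
# coefficients) with the key-dependent matrix.
rng = np.random.default_rng(4)
n, m = 64, 32
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.random(5)
Phi = chaotic_measurement_matrix(m, n)
y = Phi @ x                                   # these measurements would then be diffused
print(y[:4])
```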

Sun, Jie, Yu, Jiancheng, Zhang, Aiqun, Song, Aijun, Zhang, Fumin.  2018.  Underwater Acoustic Intensity Field Reconstruction by Kriged Compressive Sensing. Proceedings of the Thirteenth ACM International Conference on Underwater Networks & Systems. :5:1-5:8.

This paper presents a novel Kriged Compressive Sensing (KCS) approach for the reconstruction of underwater acoustic intensity fields sampled by multiple gliders following sawtooth sampling patterns. Blank areas in between the sampling trajectories may cause unsatisfactory reconstruction results. The KCS method leverages spatial statistical correlation properties of the acoustic intensity field being sampled to improve the compressive reconstruction process. Virtual data samples generated from a kriging method are inserted into the blank areas. We show that by using the virtual samples along with real samples, the acoustic intensity field can be reconstructed with higher accuracy when coherent spatial patterns exist. Corresponding algorithms are developed for both unweighted and weighted KCS methods. By distinguishing the virtual samples from real samples through weighting, the reconstruction results can be further improved. Simulation results show that both algorithms can improve the reconstruction results according to the PSNR and SSIM metrics. The methods are applied to process the ocean ambient noise data collected by the Sea-Wing acoustic gliders in the South China Sea.
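To make the kriging-plus-CS idea concrete, the sketch below generates virtual samples in the blank band between two sawtooth tracks using a simple-kriging predictor with an assumed Gaussian covariance model; these virtual samples (ideally down-weighted relative to real ones) would then feed the compressive reconstruction. The field, covariance parameters, and track geometry are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(5)

def field(p):
    """Toy smooth 'acoustic intensity' field used as ground truth."""
    return np.sin(2.5 * p[:, 0]) * np.cos(1.5 * p[:, 1])

def cov(a, b, sigma2=1.0, ell=0.6):
    """Gaussian (squared-exponential) spatial covariance model."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-d2 / (2 * ell ** 2))

# Real samples along two sawtooth-like glider tracks.
t = np.linspace(0, 1, 40)
track1 = np.c_[t * 4, 0.8 * np.abs((t * 8) % 2 - 1)]
track2 = np.c_[t * 4, 2.0 + 0.8 * np.abs((t * 8) % 2 - 1)]
obs = np.vstack([track1, track2])
z = field(obs)

# Virtual samples: kriging predictions in the blank band between the tracks.
gx, gy = np.meshgrid(np.linspace(0, 4, 12), np.linspace(1.0, 1.8, 5))
virt = np.c_[gx.ravel(), gy.ravel()]
K = cov(obs, obs) + 1e-6 * np.eye(len(obs))   # jitter for numerical stability
k = cov(virt, obs)
z_virt = k @ np.linalg.solve(K, z)            # simple-kriging predictor

print("virtual-sample RMSE vs truth:", np.sqrt(np.mean((z_virt - field(virt)) ** 2)))
# z (real samples, full weight) and z_virt (virtual samples, smaller weight)
# would jointly feed the compressive reconstruction in the weighted-KCS variant.
```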

2018-08-23
Birch, G. C., Woo, B. L., LaCasse, C. F., Stubbs, J. J., Dagel, A. L..  2017.  Computational optical physical unclonable functions. 2017 International Carnahan Conference on Security Technology (ICCST). :1–6.

Physical unclonable functions (PUFs) are devices which are easily probed but difficult to predict. Optical PUFs have been discussed within the literature, with traditional optical PUFs typically using spatial light modulators, coherent illumination, and scattering volumes; however, these systems can be large, expensive, and difficult to keep aligned in practical conditions. We propose and demonstrate a new kind of optical PUF based on computational imaging and compressive sensing to address these challenges with traditional optical PUFs. This work describes the design, simulation, and prototyping of this computational optical PUF (COPUF) that utilizes incoherent polychromatic illumination passing through an additively manufactured refracting optical polymer element. We demonstrate the ability to pass information through a COPUF using a variety of sampling methods, including the use of compressive sensing. The sensitivity of the COPUF system is also explored. We explore non-traditional PUF configurations enabled by the COPUF architecture. The double COPUF system, which employs two serially connected COPUFs, is proposed and analyzed as a means to authenticate and communicate between two entities that have previously agreed to communicate. This configuration enables estimation of a message inversion key without the calculation of individual COPUF inversion keys at any point in the PUF life cycle. Our results show that it is possible to construct inexpensive optical PUFs using computational imaging. This could lead to new uses of PUFs in places where electrical PUFs cannot be utilized effectively, such as low-cost tags and seals, and potentially as authenticating and communicating devices.

Li, Q., Xu, B., Li, S., Liu, Y., Cui, D..  2017.  Reconstruction of measurements in state estimation strategy against cyber attacks for cyber physical systems. 2017 36th Chinese Control Conference (CCC). :7571–7576.

To improve the resilience of a state estimation strategy against cyber attacks, Compressive Sensing (CS) is applied to the reconstruction of incomplete measurements for cyber physical systems. First, observability analysis is used to decide when to run the reconstruction and to assess the damage level from attacks. In particular, dictionary learning is proposed to form the over-complete dictionary by K-Singular Value Decomposition (K-SVD). Besides, due to the irregularity of the incomplete measurements, a sampling matrix is designed as the measurement matrix. Finally, simulation experiments on a 6-bus power system illustrate that the proposed method reconstructs the incomplete measurements perfectly, outperforming the joint dictionary. Even when only 29% of the measurements remain available, the proposed method generalizes across four kinds of recovery algorithms.
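The sketch below illustrates the recovery step only: available measurements are modeled by a row-selection sampling matrix, and the full measurement vector is recovered by sparse coding over an over-complete dictionary using OMP. For self-containment the dictionary is built from normalized historical snapshots rather than trained with K-SVD, and the 6-bus system is replaced by a toy low-dimensional model, so this is an assumption-laden stand-in rather than the paper's method.
```python
import numpy as np

rng = np.random.default_rng(6)

def omp(A, y, k):
    """Greedy sparse coding used as the recovery algorithm (one of several options)."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    a = np.zeros(A.shape[1])
    a[support] = coef
    return a

# Historical measurement snapshots of a toy low-dimensional power-system model.
n_meas, n_hist = 30, 400
modes = rng.standard_normal((n_meas, 5))
history = modes @ rng.standard_normal((5, n_hist)) + 0.01 * rng.standard_normal((n_meas, n_hist))

# Over-complete dictionary: normalized historical columns
# (a stand-in for the K-SVD learned dictionary in the paper).
D = history / np.linalg.norm(history, axis=0)

# A new measurement vector, of which only 29% of entries survive the attack.
z_true = modes @ rng.standard_normal(5)
avail = np.sort(rng.choice(n_meas, int(0.29 * n_meas), replace=False))
S = np.eye(n_meas)[avail]                 # sampling matrix acting as measurement matrix
z_obs = S @ z_true

a = omp(S @ D, z_obs, k=5)                # sparse code of the incomplete observation
z_rec = D @ a                             # reconstructed full measurement vector
print("relative error:", np.linalg.norm(z_rec - z_true) / np.linalg.norm(z_true))
```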

Lagunas, E., Rugini, L..  2017.  Performance of compressive sensing based energy detection. 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). :1–5.

This paper investigates closed-form expressions to evaluate the performance of the Compressive Sensing (CS) based Energy Detector (ED). The conventional way to approximate the probability density function of the ED test statistic invokes the central limit theorem and treats the decision variable as Gaussian. This approach, however, provides a good approximation only if the number of samples is large enough. This is not usually the case in the CS framework, where the goal is to keep the sample size low. Moreover, working with a reduced number of measurements is of practical interest for general spectrum sensing in cognitive radio applications, where the sensing time should be sufficiently short, since any time spent sensing cannot be used for data transmission on the detected idle channels. In this paper, we make use of low-complexity approximations based on algebraic transformations of the one-dimensional Gaussian Q-function. More precisely, this paper provides new closed-form expressions for accurate evaluation of the CS-based ED performance as a function of the compressive ratio and the Signal-to-Noise Ratio (SNR). Simulation results demonstrate the increased accuracy of the proposed equations compared to existing works.
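For reference, the snippet below implements the conventional Gaussian (central-limit) approximation of the CS-based energy detector that the paper argues is inaccurate at small sample sizes, and checks it against Monte Carlo simulation; the paper's refined closed-form expressions are not reproduced. The orthonormal-row sensing matrix, SNR, and compressive ratio are illustrative choices.
```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
N, M = 256, 64                         # Nyquist-rate samples vs compressive measurements
sigma2 = 1.0                           # noise variance
snr_db = -5.0

# Deterministic signal with the requested per-sample SNR at the Nyquist rate.
s = np.sin(2 * np.pi * 0.12 * np.arange(N))
s *= np.sqrt(10 ** (snr_db / 10) * sigma2 * N) / np.linalg.norm(s)

# Sensing matrix with orthonormal rows, so the compressed noise stays white.
A = np.linalg.qr(rng.standard_normal((N, N)))[0][:M, :]
Es = np.linalg.norm(A @ s) ** 2        # signal energy surviving compression (~ M/N of total)

# Threshold from the Gaussian (CLT) approximation for a target false-alarm rate.
pfa = 0.1
thr = sigma2 * (M + np.sqrt(2 * M) * norm.isf(pfa))

# Closed-form Gaussian approximation of Pd (the kind of expression the paper refines).
pd_gauss = norm.sf((thr - M * sigma2 - Es) / np.sqrt(2 * M * sigma2**2 + 4 * sigma2 * Es))

# Monte Carlo check of Pfa and Pd with the same threshold.
trials = 20000
noise = rng.standard_normal((trials, N)) * np.sqrt(sigma2)
T0 = np.sum((noise @ A.T) ** 2, axis=1)
T1 = np.sum(((noise + s) @ A.T) ** 2, axis=1)
print(f"Pfa target {pfa:.2f}        Monte Carlo {np.mean(T0 > thr):.3f}")
print(f"Pd Gaussian approx {pd_gauss:.3f}  Monte Carlo {np.mean(T1 > thr):.3f}")
```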

2018-05-01
Zhao, H., Ren, J., Pei, Z., Cai, Z., Dai, Q., Wei, W..  2017.  Compressive Sensing Based Feature Residual for Image Steganalysis Detection. 2017 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData). :1096–1100.

Based on a feature analysis of image content, this paper proposes a novel steganalytic method for grayscale images in the spatial domain. In this work, we first investigate the directional lifting wavelet transform (DLWT) as a sparse representation in the compressive sensing (CS) domain. Then a block CS (BCS) measurement matrix is designed using the generalized Gaussian distribution (GGD) model, in which the measurement matrix can be used to sense the DLWT coefficients of images so as to reflect the feature residual introduced by steganography. Extensive experiments show that the proposed CS-based scheme is feasible and universal for detecting steganography in the spatial domain.

2018-01-16
Yamaç, M., Sankur, B., Cemgil, A. T..  2017.  Malicious users discrimination in organized attacks using structured sparsity. 2017 25th European Signal Processing Conference (EUSIPCO). :266–270.

Communication networks can be the target of organized and distributed attacks, such as flooding-type DDoS attacks, in which malicious users aim to cripple a network server or a network domain. For the attack to have a major effect on the network, malicious users must act in a coordinated and time-correlated manner. For instance, the members of a flooding attack increase their message transmission rates rapidly but also synchronously. Even though detection and prevention of flooding attacks are well studied at the network and transport layers, the emergence and wide deployment of new systems such as VoIP (Voice over IP) have turned flooding attacks at the session layer into a new defense challenge. In this study, a structured-sparsity-based group anomaly detection system is proposed that can not only detect synchronized attacks, but also distinguish malicious groups from normal users by jointly estimating their members, structure, and starting and end points. Although we mainly focus on the security of SIP (Session Initiation Protocol) servers/proxies, which are widely used for signaling in VoIP systems, the proposed scheme can be easily adapted to any type of communication network system at any layer.

2017-12-28
Shafee, S., Rajaei, B..  2017.  A secure steganography algorithm using compressive sensing based on HVS feature. 2017 Seventh International Conference on Emerging Security Technologies (EST). :74–78.

Steganography is the science of hiding information in order to send secret messages using a carrier known as the stego object. Steganographic technology is based on three principles: security, robustness, and capacity. In this paper, we present a method that hides a digital image using compressive sensing to increase the security of the stego image, based on human visual system features. The results show that our proposed method provides higher security in comparison with the other presented methods. The Bit Correction Rate between the original secret message and the extracted message is used to show the accuracy of this method.

2017-12-12
Gilbert, Anna C., Li, Yi, Porat, Ely, Strauss, Martin J..  2017.  For-All Sparse Recovery in Near-Optimal Time. ACM Trans. Algorithms. 13:32:1–32:26.

An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N; an m-by-N measurement Φ; and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ‖x̂ − x‖_1 ≤ (1 + ε)‖x − x_k‖_1. We consider the “for all” model, in which a single matrix Φ, possibly “constructed” non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm by Porat and Strauss [2012] uses O(ε^{-3} k log(N/k)) measurements and runs in time O(k^{1−α} N^α) for any constant α > 0. In this article, we improve the number of measurements to O(ε^{-2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ε)), with a modest restriction that k ≤ N^{1−α} and ε ≤ (log k / log N)^γ for any constants α, β, γ > 0. When k ≤ log^c N for some c > 0, the runtime is reduced to O(k poly(log N, 1/ε)). With no restrictions on ε, we have an approximation recovery system with m = O((k/ε) log(N/k) ((log N / log k)^γ + 1/ε)) measurements. The overall architecture of this algorithm is similar to that of Porat and Strauss [2012] in that we repeatedly use a weak recovery system (with varying parameters) to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or two unbalanced expanders for a deterministic algorithm). The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages. The idea is to encode the signal position index i by associating it with a unique message m_i, which will be encoded to a longer message m′_i (in contrast to Porat and Strauss [2012], in which the encoding is simply the identity). Portions of the message m′_i correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions. The decoding or recovery algorithm consists of recovering the portions of the longer messages m′_i and then decoding to the original messages m_i, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to the list recovery introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_i} are independent of the hashing, which enables us to obtain a better result.

2017-09-15
Shi, Tianlin, Agostinelli, Forest, Staib, Matthew, Wipf, David, Moscibroda, Thomas.  2016.  Improving Survey Aggregation with Sparsely Represented Signals. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1845–1854.

In this paper, we develop a new aggregation technique to reduce the cost of surveying. Our method aims to jointly estimate a vector of target quantities, such as public opinion or voter intent, across time, and to maintain good estimates when using only a fraction of the data. Inspired by the James-Stein estimator, we address this challenge by shrinking the estimates toward a global mean which is assumed to have a sparse representation in some known basis. This assumption has led to two different methods for estimating the global mean: orthogonal matching pursuit and deep learning, both of which significantly reduce the number of samples needed to achieve good estimates of the true means of the data. In the case of presidential elections, the method can estimate the outcome of the 2012 United States elections while saving hundreds of thousands of samples and maintaining accuracy.
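A toy sketch of the core idea, under simplifying assumptions: per-time-point survey estimates are shrunk, James-Stein style, toward a global mean curve that is assumed sparse in a known basis (here a DCT basis) and estimated with orthogonal matching pursuit. The basis, sample sizes, and data are synthetic and illustrative; the deep-learning variant and the election data are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(8)
T = 64                                   # number of time points (e.g., weekly polls)

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are the basis functions."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0] /= np.sqrt(2.0)
    return C

def omp(A, y, k):
    """Orthogonal matching pursuit for the sparse global-mean coefficients."""
    support, residual, c = [], y.copy(), np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        c, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ c
    out = np.zeros(A.shape[1])
    out[support] = c
    return out

# Ground-truth time series of a target quantity, smooth => sparse in the DCT basis.
B = dct_matrix(T)
coef = np.zeros(T)
coef[[0, 1, 2, 5]] = [4.0, 1.5, -0.8, 0.3]
mu = B.T @ coef                          # true means over time

# Each time point is surveyed with only a small sample => noisy raw estimates.
n_per_t = 30
raw = mu + rng.standard_normal(T) / np.sqrt(n_per_t)

# Estimate a sparse global mean curve with OMP, then shrink toward it (positive-part JS).
global_mean = B.T @ omp(B.T, raw, k=4)
sigma2 = 1.0 / n_per_t
shrink = max(0.0, 1.0 - (T - 2) * sigma2 / np.sum((raw - global_mean) ** 2))
js = global_mean + shrink * (raw - global_mean)

print("raw RMSE:", np.sqrt(np.mean((raw - mu) ** 2)))
print("JS  RMSE:", np.sqrt(np.mean((js - mu) ** 2)))
```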

Li, Zheng, Xia, Yuli, Ye, Ruiqi, Zhao, Junsuo.  2016.  Compressive Sensing for Space Image Compressing. Proceedings of the 2016 International Conference on Intelligent Information Processing. :23:1–23:5.

Compressive sensing is a new technique by which sparse signals are sampled and recovered from a few measurements. To address the disadvantages of traditional space image compression methods, a completely new compression scheme within the compressive sensing framework is developed in this paper. First, in the coding stage, a simple binary measurement matrix is constructed to obtain signal measurements. Second, the input image is divided into small blocks, and the image blocks are used as training sets to learn a dictionary basis for sparse representation. Finally, a sparse reconstruction algorithm is used to recover the original input image. Experimental results show that both the compression rate and the recovered image quality of the proposed method are high. Besides, as the computational cost of the sampling stage is very low, the method is suitable for on-board applications in astronomy.
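The coding stage can be sketched as follows: a simple binary measurement matrix (here, a few ones per row, an illustrative construction rather than the paper's exact one) senses each non-overlapping image block, so only a short measurement vector per block is stored or downlinked; decoding would then sparse-code each block over a dictionary learned from training blocks.
```python
import numpy as np

rng = np.random.default_rng(9)

B = 8                                    # block size (8x8 blocks)
n = B * B
m = 24                                   # measurements per block (subrate ~ 0.375)

# Simple binary measurement matrix: each row selects a few random pixels (0/1 entries).
Phi = np.zeros((m, n))
for i in range(m):
    Phi[i, rng.choice(n, 4, replace=False)] = 1.0
Phi /= np.sqrt(4)

# Split a toy "space image" into non-overlapping blocks and sense each block.
img = rng.random((64, 64)) ** 4          # mostly dark image with a few bright pixels
blocks = img.reshape(8, B, 8, B).swapaxes(1, 2).reshape(-1, n)
measurements = blocks @ Phi.T            # coding-stage output (one row per block)

print("raw pixels per block :", n)
print("values sent per block:", m)
# Decoding would sparse-code each block over a dictionary learned from training
# blocks (e.g., with K-SVD or online dictionary learning) and reassemble the image.
```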

2017-02-21
Chen Bai, S. Xu, B. Jing, Miao Yang, M. Wan.  2015.  "Compressive adaptive beamforming in 2D and 3D ultrafast active cavitation imaging". 2015 IEEE International Ultrasonics Symposium (IUS). :1-4.

Ultrafast active cavitation imaging (UACI) based on plane waves can be implemented at a high frame rate, and adaptive beamforming has been introduced into it to enhance the resolution and signal-to-noise ratio (SNR) of the images. However, regular adaptive beamforming continuously updates the spatial filter for each sample point, which requires a huge amount of calculation, especially at a high sampling rate and, moreover, in 3D imaging. In order to achieve UACI rapidly with satisfactory resolution and SNR, this paper proposes an adaptive beamformer based on compressive sensing (CS), which retains the quality of adaptive beamforming but substantially reduces the amount of computation. Simulation and experimental results show that, compared with regular adaptive beamforming, the new method achieves roughly an eightfold reduction in computation time.

A. Dutta, R. K. Mangang.  2015.  "Analog to information converter based on random demodulation". 2015 International Conference on Electronic Design, Computer Networks & Automated Verification (EDCAV). :105-109.

With the increase in signal bandwidth, conventional analog-to-digital converters (ADCs), operating on the basis of the Shannon/Nyquist theorem, are forced to work at very high rates, leading to low dynamic range and high power consumption. This paper describes an analog-to-information converter based on compressive sensing techniques. The high sampling rate, which is the main drawback of conventional ADCs, is successfully reduced to four times lower than the conventional rate. The system also offers the advantage of low power dissipation.
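A minimal model of the random-demodulation idea: the wideband input, assumed sparse in an orthonormal basis (a DCT basis here, as an illustrative choice), is multiplied by a pseudo-random ±1 chipping sequence at the Nyquist rate and then integrated and dumped to a much lower rate; the sketch verifies that this front end matches the linear CS model handed to the reconstruction stage. The sizes and the 8x rate reduction are illustrative, not the hardware parameters of the paper.
```python
import numpy as np

rng = np.random.default_rng(10)

N, R = 512, 64                       # Nyquist-rate length vs low-rate output samples

# Orthonormal DCT basis: the wideband input is assumed sparse in it.
k = np.arange(N)[:, None]
m = np.arange(N)[None, :]
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (k + 0.5) * m / N)
Psi[:, 0] /= np.sqrt(2.0)

alpha = np.zeros(N)
alpha[rng.choice(N, 3, replace=False)] = rng.standard_normal(3)
x = Psi @ alpha                      # sparse multitone-like signal at the Nyquist rate

# Random demodulator front end: multiply by a +/-1 chipping sequence at the
# Nyquist rate, then integrate-and-dump down to R low-rate samples.
chips = rng.choice([-1.0, 1.0], size=N)
y = (chips * x).reshape(R, N // R).sum(axis=1)

# Equivalent digital model y = A @ alpha used by the CS reconstruction stage.
H = np.kron(np.eye(R), np.ones(N // R))       # integrate-and-dump operator
A = H @ (chips[:, None] * Psi)
print("front end matches CS model:", np.allclose(y, A @ alpha))
print("rate reduction factor     :", N // R)
```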

Z. Zhu, M. B. Wakin.  2015.  "Wall clutter mitigation and target detection using Discrete Prolate Spheroidal Sequences". 2015 3rd International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa). :41-45.

We present a new method for mitigating wall return and a new greedy algorithm for detecting stationary targets after wall clutter has been cancelled. Given limited measurements of a stepped-frequency radar signal consisting of both wall and target return, our objective is to detect and localize the potential targets. Modulated Discrete Prolate Spheroidal Sequences (DPSS's) form an efficient basis for sampled bandpass signals. We mitigate the wall clutter efficiently within the compressive measurements through the use of a bandpass modulated DPSS basis. Then, in each step of an iterative algorithm for detecting the target positions, we use a modulated DPSS basis to cancel nearly all of the target return corresponding to previously selected targets. With this basis, we improve upon the target detection sensitivity of a Fourier-based technique.
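The wall-mitigation step can be sketched with a modulated DPSS basis as follows: baseband DPSS vectors (via scipy.signal.windows.dpss) are modulated to the assumed wall-delay band, and the received stepped-frequency data are projected onto the orthogonal complement of that subspace, leaving the target returns largely intact. The delay band, number of DPSS vectors, and scene are illustrative assumptions, and the greedy target-detection stage is not reproduced.
```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(11)

N = 128                                   # number of stepped-frequency measurements
n = np.arange(N)

def returns(delays, amps):
    """Frequency-domain return: each (normalized) delay is a complex exponential."""
    return sum(a * np.exp(-2j * np.pi * d * n) for d, a in zip(delays, amps))

# Wall return occupies a small known band of normalized delays; targets lie outside it.
wall = returns(np.linspace(0.005, 0.02, 6), 5.0 * (rng.random(6) + 0.5))
targets = returns([0.18, 0.31], [1.0, 0.7])
x = wall + targets + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Modulated DPSS basis covering the wall-delay band [0.005, 0.02].
center, half_bw = 0.0125, 0.01
NW = N * half_bw                          # time-halfbandwidth product
E = dpss(N, NW, Kmax=int(2 * NW) + 2).T   # (N, K) baseband DPSS vectors, ~2NW+1 of them
Q = E * np.exp(-2j * np.pi * center * n)[:, None]   # modulate to the wall band

# Project out the wall subspace (least-squares projection onto its complement).
coef, *_ = np.linalg.lstsq(Q, x, rcond=None)
x_clean = x - Q @ coef

print("wall energy before   :", np.linalg.norm(wall) ** 2)
print("residual energy after:", np.linalg.norm(x_clean - targets) ** 2)
```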

I. Ilhan, A. C. Gurbuz, O. Arikan.  2015.  "Sparsity based robust Stretch Processing". 2015 IEEE International Conference on Digital Signal Processing (DSP). :95-99.

Stretch Processing (SP) is a radar signal processing technique that provides high range resolution by processing large-bandwidth signals with lower-rate Analog-to-Digital Converters (ADCs). The range resolution of the large-bandwidth signal is obtained by looking into a limited range window with low-rate ADC samples. The target space in the observed range window is sparse, and Compressive Sensing (CS) is an important tool to further decrease the number of measurements and sparsely reconstruct the target space for sparse scenes with a known basis, which is the Fourier basis in the general application of SP. Although classical CS techniques might be directly applied to SP, reconstruction performance degrades due to off-grid targets. In this paper, the applicability of the compressive sensing framework and its sparse signal recovery techniques to stretch processing is studied considering off-grid cases. For sparsity-based robust SP, the Perturbed Parameter Orthogonal Matching Pursuit (PPOMP) algorithm is proposed. PPOMP is an iterative technique that estimates off-grid target parameters through a gradient descent. To compute the error between the actual and reconstructed parameters, the Earth Mover's Distance (EMD) is used. The performance of the proposed algorithm is compared with classical CS and SP techniques.

2015-05-01
Hong Jiang, Songqing Zhao, Zuowei Shen, Wei Deng, Wilford, P.A., Haimi-Cohen, R..  2014.  Surveillance video analysis using compressive sensing with low latency. Bell Labs Technical Journal. 18:63-74.

We propose a method for the analysis of surveillance video by using low rank and sparse decomposition (LRSD) with low latency, combined with compressive sensing, to segment the background and extract moving objects in a surveillance video. Video is acquired by compressive measurements, and the measurements are used to analyze the video by a low rank and sparse decomposition of a matrix. The low rank component represents the background, and the sparse component, which is obtained in a tight wavelet frame domain, is used to identify moving objects in the surveillance video. An important feature of the proposed low latency method is that the decomposition can be performed with a small number of video frames, which reduces latency in the reconstruction and makes real-time processing of surveillance video possible. The low latency method is both justified theoretically and validated experimentally.
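A compact sketch of the low rank plus sparse decomposition step on a short buffer of frames is given below, using alternating singular-value thresholding and soft thresholding. It operates on fully observed synthetic frames for simplicity; the paper's compressive-measurement setting and tight wavelet frame domain for the sparse component are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(12)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Tiny synthetic "video": 20x20-pixel frames, only 8 frames (low latency),
# static background plus a small bright object moving across the scene.
h, w, F = 20, 20, 8
background = np.outer(np.linspace(0.3, 0.8, h), np.ones(w))
frames = []
for f in range(F):
    img = background.copy()
    img[8:11, 2 + 2 * f: 5 + 2 * f] += 1.0        # moving object
    frames.append(img.reshape(-1))
D = np.stack(frames, axis=1)                      # pixels x frames data matrix

# Robust-PCA-style decomposition D ~ L (low rank background) + S (sparse movers),
# solved with simple alternating proximal steps.
lam = 1.0 / np.sqrt(max(D.shape))
L = np.zeros_like(D)
S = np.zeros_like(D)
for _ in range(100):
    L = svt(D - S, 1.0)
    S = soft(D - L, lam)

moving = np.abs(S) > 0.5                          # mask of detected moving pixels
print("frames used        :", F)
print("object pixels found:", int(moving.sum()), "of", F * 9)
```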