
SoS Newsletter- Advanced Book Block



Multidimensional Signal Processing

2015 (Part 2)


Multidimensional signal processing research deals with issues such as those arising in automatic target detection and recognition, geophysical inverse problems, and medical estimation problems. Its goal is to develop methods to extract information from diverse data sources amid uncertainty. Research cited here was presented in 2015.

C. Djiongo, S. M. Mpong and O. Monga, “Estimation of Aboveground Biomass from Satellite Data Using Quaternion-Based Texture Analysis of Multi Chromatic Images,” 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Bangkok, 2015, pp. 68-75. doi: 10.1109/SITIS.2015.97
Abstract: In recent years, the first approaches using quaternion numbers to handle and model multi-chromatic images in a holistic manner were introduced. By defining a quaternion Fourier transform, multidimensional data such as color images can be processed efficiently and easily. At the same time, multi-chromatic satellite data are a primary source for measuring past trends and monitoring changes in forest carbon stocks, so processing these data is a fundamental challenge. In this work, inspired by quaternion Fourier transforms, we propose a texture-color descriptor to extract relevant information from multi-chromatic satellite images. We also propose a quaternion-based texture model, named FOTO++, to address the aboveground biomass estimation problem. Our model begins by removing noise from the multi-chromatic data while preserving the edges of canopies. Color texture indices are then extracted using a discrete form of the quaternion Fourier transform, and finally a support vector regression method is used to derive biomass estimates from the texture indices. Our texture features are modeled by a vector composed of the radial spectrum coming from the amplitude of the quaternion Fourier transform. We conduct several experiments to study the sensitivity of our model to acquisition parameters, and we assess its performance both on synthetic images and on real multi-chromatic images of Cameroonian forest. The results support that our model is more robust to acquisition parameters than the classical Fourier Texture Ordination model and more accurate for aboveground biomass estimates. We stress that a similar methodology could be used with quaternion wavelets. These results highlight the potential of quaternion-based approaches for studying multi-chromatic images.
Keywords: Fourier transforms; chromatography; forestry; geophysical techniques; regression analysis; rocks; support vector machines; Cameroonian forest; aboveground biomass estimation; classical Fourier texture ordination model; color images; color texture indices; forest carbon stocks; multichromatic satellite data; multichromatic satellite images; quaternion Fourier transform; quaternion numbers; quaternion wavelets; quaternion-based texture analysis; radial spectrum; support vector regression method; texture-color descriptor; Biological system modeling; Biomass; Color; Estimation; Quaternions; Satellites; aboveground biomass; color image; color-texture; discrete quaternion Fourier transform; multi chromatic satellite image (ID#: 16-9848)
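The radial-spectrum texture feature the abstract describes can be illustrated with ordinary tools: average the Fourier amplitude over concentric frequency rings. The sketch below uses a single-channel image and a standard 2-D FFT in place of the paper's quaternion transform; the function name and ring count are my own choices, not the authors'.

```python
import numpy as np

def radial_spectrum(image, n_bins=16):
    """Mean FFT amplitude in concentric frequency rings (a texture descriptor)."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)      # distance from the zero-frequency bin
    r_max = r.max()
    spectrum = np.empty(n_bins)
    for k in range(n_bins):
        ring = (r >= k * r_max / n_bins) & (r < (k + 1) * r_max / n_bins)
        spectrum[k] = amp[ring].mean() if ring.any() else 0.0
    return spectrum

rng = np.random.default_rng(0)
# coarse texture: low spatial frequencies dominate; fine texture: white noise
coarse = rng.normal(size=(8, 8)).repeat(8, axis=0).repeat(8, axis=1)
fine = rng.normal(size=(64, 64))
rs_coarse = radial_spectrum(coarse)
rs_fine = radial_spectrum(fine)
```

A coarse canopy-like texture concentrates its amplitude in the inner rings, which is exactly the cue the FOTO-style ordination exploits.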


N. Udayanga, A. Madanayake, C. Wijenayake and R. Acosta, “Applebaum Adaptive Array Apertures with 2-D IIR Space-Time Circuit-Network Resonant Pre-Filters,” 2015 IEEE Radar Conference (RadarCon), Arlington, VA, 2015, pp. 0611-0615. doi: 10.1109/RADAR.2015.7131070
Abstract: A modification to the well-known Applebaum adaptive beamformer is proposed, employing a low-complexity space-time network-resonant IIR beamfilter. The network-resonant filter is a multiple-input multiple-output (MIMO) linear multidimensional recursive discrete system. It is applied as a pre-filter to an Applebaum adaptive beamformer in order to perform highly directional receive-mode beamforming with improved noise and interference rejection. The spatial selectivity (directivity) of the Applebaum beamformer is enhanced by introducing complex manifolds from the 2-D IIR beamfilter into the zero-manifold-only transfer function of the adaptive beamformer. This leads to the proposed network-resonant adaptive Applebaum array, which shows angle-dependent levels of additional improvement in output SINR (best case, up to 12 dB improvement near the end-fire direction). The ability to improve the output SINR by 0-12 dB compared to the best available adaptive Applebaum beamformer, without using additional antenna elements in the array, is a useful feature of the scheme. The proposed network-resonant Applebaum adaptive beamformer can be implemented as an upgrade to the digital signal processor, without changes to the array or RF electronics.
Keywords: IIR filters; MIMO communication; adaptive signal processing; array signal processing; signal denoising; two-dimensional digital filters; 2D IIR space-time circuit-network resonant pre-filters; MIMO system; RF electronics; angle-dependent levels; applebaum adaptive array apertures; applebaum adaptive beamformer; complex-manifolds; digital signal processor; directional receive mode beamforming; improved noise rejection; interference rejection; low complexity space-time network-resonant IIR beamfilter; multiple input multiple output linear multidimensional recursive discrete system; output SINR; spatial selectivity; zero-manifold-only transfer function; Adaptive arrays; Array signal processing; Arrays; Interference; Signal to noise ratio; Transfer functions; 2-D IIR beamfilter; Antenna arrays; Applebaum beamformer; SINR improvement (ID#: 16-9849)


A. Wicenec, “From Antennas to Multi-Dimensional Data Cubes: The SKA Data Path,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 5645-5649. doi: 10.1109/ICASSP.2015.7179052
Abstract: The SKA baseline design defines three independent radio antenna arrays producing vast amounts of data. To arrive at still large, but more manageable, data volumes and rates, the information will be processed on-line into science-ready products. This requires a direct network interface between the correlators and dedicated world-class HPC facilities. Due to the remoteness of the two SKA sites, power and the availability of maintenance staff are but two of the limiting factors for the operation of the arrays. The baseline design therefore keeps only the actual core signal processing close to the center of the arrays; the on-line HPC data reduction will be located in Perth and Cape Town, respectively. This paper presents an outline of the complete digital data path, starting at the digitiser outputs and ending in data dissemination and science post-processing, with a focus on the data management aspects within the Science Data Processor (SDP) element, responsible for the post-correlator signal processing and data reduction.
Keywords: antenna arrays; array signal processing; astronomy computing; data handling; radioastronomical techniques; SDP element; SKA data path; multidimensional data cubes; online HPC data reduction; radio antenna arrays; radio astronomy; science data processor element; Array signal processing; Arrays; Australia; Correlators; Pipelines; Radio astronomy; Standards; Radio Astronomy; Square Kilometre Array; data management; data processing (ID#: 16-9850)


R. Rademacher, J. A. Jackson, A. Rexford and C. S. Kabban, “Quadrature-Based Credible Set Estimation for Radar Feature Extraction,” 2015 IEEE Radar Conference (RadarCon), Arlington, VA, 2015, pp. 1027-1032. doi: 10.1109/RADAR.2015.7131145
Abstract: Efficient and accurate extraction of physically-relevant features from measured radar data is desirable for automatic target recognition (ATR). In this paper, we present an estimation technique to find credible sets of parameters for any given feature model. The proposed approach provides parameter estimates along with confidence values. Maximum a posteriori (MAP) estimates provide a single (vector) parameter value, typically found via sampling methods. However, computational inefficiency and inaccuracy issues commonly arise when sampling multi-modal or multi-dimensional posteriors. As an alternative, we use Gaussian quadrature to compute probability mass functions, covering the entire probability space. An efficient zoom-in approach is used to iteratively locate regions of high probability. The (possibly disjoint) regions of high probability correspond to sets of feasible parameter values, called credible sets. Thus, our quadrature-based credible set estimator (QBCSE) includes values very near the true parameter as well as confuser values that may lie far from the true parameter but map with high probability to the same observed data. The credible set and associated probabilities are computed and should both be passed to an ATR algorithm for informed decision-making. Although applicable to any feature model, the proposed QBCSE scheme is demonstrated using canonical shape feature models in synthetic aperture radar phase history.
Keywords: Gaussian processes; decision making; feature extraction; maximum likelihood estimation; radar target recognition; signal sampling; synthetic aperture radar; vectors; ATR algorithm; Gaussian quadrature; MAP estimation; QBCSE scheme; automatic target recognition; call credible sets; canonical shape feature models; confidence values; decision-making; feasible parameter values; maximum a posteriori estimation; multidimensional posteriors; multimodal posteriors; physically-relevant features; probability mass functions; probability space; quadrature-based credible set estimation; radar feature extraction; sampling methods; single parameter value; synthetic aperture radar phase history; vector; zoom-in approach; Accuracy; Estimation; Feature extraction; Graphics processing units; Probability density function; Radar; Shape; Bayesian estimation; credible set; quadrature (ID#: 16-9851)
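The idea of a possibly disjoint credible set can be sketched in one dimension: evaluate an unnormalized posterior on a grid, normalize to a probability mass function, and keep the smallest set of cells reaching the desired mass. This is a single-pass stand-in for the paper's Gaussian-quadrature and iterative zoom-in machinery; the function name and the bimodal toy posterior are my own.

```python
import numpy as np

def credible_cells(logpost, lo, hi, n=601, level=0.95):
    """Grid-based credible set: the smallest set of cells whose total
    normalized posterior mass reaches `level` (may be disjoint)."""
    x = np.linspace(lo, hi, n)
    lp = logpost(x)
    p = np.exp(lp - lp.max())
    p /= p.sum()                                  # probability mass per cell
    order = np.argsort(p)[::-1]                   # highest-probability cells first
    mass = np.cumsum(p[order])
    keep = order[: np.searchsorted(mass, level) + 1]
    return x[np.sort(keep)], p

# two well-separated modes, mimicking a true parameter plus a "confuser"
logpost = lambda x: np.logaddexp(-0.5 * ((x - 1) / 0.1) ** 2,
                                 -0.5 * ((x + 1) / 0.1) ** 2)
cells, p = credible_cells(logpost, -3, 3)
```

The returned cells cluster around both modes and exclude the low-probability region between them, which is why a credible set carries more decision-relevant information than a single MAP point.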


H. L. Kennedy, “Parallel Software Implementation of Recursive Multidimensional Digital Filters for Point-Target Detection in Cluttered Infrared Scenes,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 1086-1090. doi: 10.1109/ICASSP.2015.7178137
Abstract: A technique for the enhancement of point targets in clutter is described. The local 3-D spectrum at each pixel is estimated recursively. An optical flow-field for the textured background is then generated using the 3-D autocorrelation function and the local velocity estimates are used to apply high-pass velocity-selective spatiotemporal filters, with finite impulse responses (FIRs), to subtract the background clutter signal, leaving the foreground target signal, plus noise. Parallel software implementations using a multicore central processing unit (CPU) and a graphical processing unit (GPU) are investigated.
Keywords: filtering theory; image sequences; object detection; 3D autocorrelation function; CPU; GPU; cluttered infrared scenes; finite impulse responses; graphical processing unit; high-pass velocity-selective spatiotemporal filters; local 3D spectrum; local velocity estimates; multicore central processing unit; optical flow-field; point-target detection; recursive multidimensional digital filters; Central Processing Unit; Digital filters; Discrete Fourier transforms; Filter banks; Graphics processing units; Optical filters; Spatiotemporal phenomena; Digital filter; Image processing; Multithreading; Optical flow; Recursive spectrum; Whitening (ID#: 16-9852)


T. F. de Lima, A. N. Tait, M. A. Nahmias, B. J. Shastri and P. R. Prucnal, “Improved Spectral Sensing in Cognitive Radios Using Photonic-Based Principal Component Analysis,” Signal Processing and Communication Systems (ICSPCS), 2015 9th International Conference on, Cairns, QLD, 2015, pp. 1-5. doi: 10.1109/ICSPCS.2015.7391750
Abstract: We propose and experimentally demonstrate a microwave photonic system that iteratively performs principal component analysis on partially correlated, 8-channel, 13 Gbaud signals. The system that is presented is able to adapt to oscillations in interchannel correlations and follow changing principal components. The system provides advantages in bandwidth performance and fan-in scalability that are far superior to electronic counterparts. Wideband, multidimensional techniques are relevant to >10 GHz cognitive radio systems and could bring solutions for intelligent radio communications and information sensing, including spectral sensing.
Keywords: cognitive radio; microwave photonics; principal component analysis; radio spectrum management; signal processing; 8-channel signals; cognitive radios; interchannel correlations; microwave photonic system; partially correlated signals; photonic-based principal component analysis; spectral sensing; Bandwidth; Correlation; Microwave communication; Microwave filters; Microwave photonics; Principal component analysis; Analog Signal Processing; Microwave Photonics; RF Photonics (ID#: 16-9853)
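The computation the photonic system performs — extracting principal components from partially correlated multichannel data — can be written down directly. This is a plain batch eigendecomposition sketch of that underlying math, not the iterative photonic implementation; the synthetic 8-channel data and function name are my own.

```python
import numpy as np

def principal_components(X, k):
    """Top-k eigenvalues and principal directions of data X (samples x channels)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    w, V = np.linalg.eigh(cov)                  # eigenvalues in ascending order
    return w[::-1][:k], V[:, ::-1][:, :k]

rng = np.random.default_rng(1)
shared = rng.normal(size=(2000, 1))             # one strong component shared by all channels
X = shared @ np.ones((1, 8)) + 0.1 * rng.normal(size=(2000, 8))   # 8 partially correlated channels
evals, comps = principal_components(X, 2)
```

With one dominant shared signal, the first eigenvalue stands far above the rest — the separation a spectral-sensing front end relies on.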


M. Darwish, P. Cox, G. Pillonetto and R. Tóth, “Bayesian Identification of LPV Box-Jenkins Models,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 66-71. doi: 10.1109/CDC.2015.7402087
Abstract: In this paper, we introduce a nonparametric approach in a Bayesian setting to efficiently estimate, both in the stochastic and computational sense, linear parameter-varying (LPV) input-output models under general noise conditions of Box-Jenkins (BJ) type. The approach is based on the estimation of the one-step-ahead predictor model of general LPV-BJ structures, where the sub-predictors associated with the input and output signals are captured as asymptotically stable infinite impulse response models (IIRs). These IIR sub-predictors are identified in a completely nonparametric sense, where not only the coefficients are estimated as functions, but also the whole time evolution of the impulse response is estimated as a function. In this Bayesian setting, the one-step-ahead predictor is modelled as a zero-mean Gaussian random field, where the covariance function is a multidimensional Gaussian kernel that encodes both the possible structural dependencies and the stability of the predictor. The unknown hyperparameters that parameterize the kernel are tuned using the empirical Bayes approach, i.e., optimization of the marginal likelihood with respect to the available data. It is also shown that, in the case where the predictor has a finite order, i.e., the true system has an ARX noise structure, our approach is able to recover the underlying structural dependencies. The performance of the identification method is demonstrated on LPV-ARX and LPV-BJ simulation examples by means of a Monte Carlo study.
Keywords: Bayes methods; Gaussian processes; asymptotic stability; autoregressive moving average processes; linear parameter varying systems; parameter estimation; random processes; signal denoising; transient response; ARX noise structure; Bayesian identification; IIR subpredictor identification; LPV Box-Jenkins models; LPV-ARX simulation example; LPV-BJ simulation example; Monte Carlo study; asymptotic infinite impulse response model stability; covariance function; empirical Bayes approach; general LPV-BJ structures; impulse response; input signals; linear parameter-varying input-output models; multidimensional Gaussian kernel; nonparametric approach; one-step-ahead predictor model; output signals; time evolution; unknown hyperparameters; zero-mean Gaussian random field; Asymptotic stability; Bayes methods; Computational modeling; Estimation; Kernel; Optimization; Predictive models (ID#: 16-9854)


Y. Wu, G. Wen, F. Gao and Y. Fan, “Superpixel Regions Extraction for Target Detection,” Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, Ningbo, 2015, pp. 1-5. doi: 10.1109/ICSPCC.2015.7338955
Abstract: In this paper, a target region detection algorithm based on superpixel segmentation, a technique from computer vision, is applied to high-resolution remote sensing images for superpixel-level rather than pixel-level target detection. To address the massive data, redundant information, and time-consuming target search of high-resolution remote sensing images with complex scenes and large sizes, a region of interest (ROI) extraction strategy based on visual saliency map detection is adopted. Second, a multidimensional description vector of local features is constructed via superpixels obtained from simple linear iterative clustering (SLIC). Third, prior information about the target is combined to determine the feature threshold, from which candidate superpixels belonging to the target are selected. Experimental results show that the proposed algorithm is effective on high-resolution remote sensing images, overcoming complex background interference and remaining robust to target rotation. In addition, the proposed algorithm performs favorably against the traditional sliding-window search algorithm: on one hand, it significantly reduces the computational complexity of the search space and achieves dimensionality reduction of the data; on the other hand, it yields a lower false-alarm probability and improves detection accuracy.
Keywords: feature extraction; image resolution; image segmentation; iterative methods; military systems; object detection; remote sensing; complex background interference; computer vision; data dimensionality reduction; false probability; high-resolution remote sensing images; local feature; multidimensional description vector; pixel-level target detection; search space; simple linear iterative clustering; superpixel regions extraction; superpixel segmentation; superpixel-level; target region detection target rotation; traditional sliding windows search algorithm; visual saliency map detection; Aircraft; Feature extraction; Image color analysis; Image segmentation; Object detection; Remote sensing; Shape; Remote sensing images; target detection; visual saliency (ID#: 16-9855)


B. Xuhui, C. Zihao and H. Zhongsheng, “Quantized Feedback Control for a Class of 2-D Systems with Missing Measurements,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 3073-3078. doi: 10.1109/ChiCC.2015.7260113
Abstract: In this paper, the quantized feedback control problem is investigated for a class of network-based 2-D systems described by the Roesser model with data missing. It is assumed that the states of the controlled system are available and that they are quantized by a logarithmic quantizer before being communicated. Moreover, the data-missing phenomenon is modeled by a Bernoulli distributed stochastic variable taking values of 1 and 0. A sufficient condition is derived by the method of sector-bounded uncertainties, which guarantees that the closed-loop system is stochastically stable. Based on this condition, a quantized feedback controller can be designed using the linear matrix inequality technique. A simulation example is given to illustrate the proposed method.
Keywords: H∞ control; closed loop systems; control system synthesis; feedback; linear matrix inequalities; multidimensional systems; networked control systems; stability; stochastic processes; uncertain systems; Bernoulli distributed stochastic variable; Roesser model; closed-loop system; data missing phenomenon; linear matrix inequalities technique; logarithmic quantizer; missing measurement; network-based 2D system; quantized H∞ control problem; quantized feedback control problem; quantized feedback controller design; sector-bounded uncertainties; stochastic stability; sufficient condition; Asymptotic stability; Closed loop systems; Feedback control; Quantization (signal); Sufficient conditions; Symmetric matrices; 2-D systems; missing measurements; networked control systems; quantized control (ID#: 16-9856)
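The two ingredients of the measurement model — a logarithmic quantizer and Bernoulli data dropouts — are easy to simulate. The sketch below uses one simple rounding rule (nearest level in the log domain) for the quantizer; the level spacing `rho`, the dropout probability, and the names are my own illustrative choices.

```python
import numpy as np

def log_quantize(x, rho=0.8, u0=1.0):
    """Logarithmic quantizer: snap x to the nearest level +/- u0 * rho**k.
    Rounding in the log domain keeps the relative error below
    1/sqrt(rho) - 1 (about 0.118 for rho = 0.8)."""
    if x == 0:
        return 0.0
    k = np.round(np.log(abs(x) / u0) / np.log(rho))
    return np.sign(x) * u0 * rho ** k

rng = np.random.default_rng(7)
samples = rng.uniform(-10, 10, size=1000)
# Bernoulli(0.9) indicator: 1 = state sample received, 0 = measurement missing
received = rng.random(1000) < 0.9
q = np.array([log_quantize(x) if r else 0.0 for x, r in zip(samples, received)])
```

The bounded relative error is what lets the quantizer be absorbed into a sector-bounded uncertainty in the stability analysis, while the Bernoulli indicator models the missing measurements.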


R. Hu, W. Qi and Z. Guo, “Feature Reduction of Multi-Scale LBP for Texture Classification,” 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Adelaide, SA, 2015, pp. 397-400. doi: 10.1109/IIH-MSP.2015.79
Abstract: The local binary pattern (LBP) is a simple yet powerful texture descriptor modeling the relationship of pixels to their local neighborhood. By considering multiple neighborhood radii, multi-scale LBP (MS-LBP) is derived. For MS-LBP generation, LBP histograms at different scales are first extracted separately and then combined in a concatenated or joint way, resulting in a one-dimensional or multi-dimensional histogram, respectively. Concatenated MS-LBP has a low feature dimension but loses some important discriminative information, while joint MS-LBP performs well but suffers from a high feature dimension. In this work, based on the similarity between patterns at different scales and the sparsity of the joint MS-LBP histogram, a feature reduction method for joint MS-LBP is proposed. Experiments on Outex and CUReT show that the proposed method and its extension have performance comparable to the original joint MS-LBP but lower feature dimension.
Keywords: feature extraction; image classification; image texture; CURet; MS-LBP generation; Outex; feature reduction method; high-feature dimension; joint MS-LBP histogram; local binary pattern; local neighborhood; low-feature dimension; multidimensional histogram; multiscale LBP; neighborhood radii; one-dimensional histogram; pixel relationship; texture classification; texture descriptor modeling; Correlation; Databases; Feature extraction; Hamming distance; Histograms; Lighting; Robustness; Local binary pattern (LBP); feature reduction; multi-scale LBP (MSLBP) (ID#: 16-9857)
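The basic LBP computation and the concatenated multi-scale variant the abstract contrasts can be sketched in a few lines of numpy. This uses a square 8-neighbour sampling (a common simplification of the circular interpolated neighbourhood); the function name is mine.

```python
import numpy as np

def lbp_histogram(img, radius=1):
    """8-neighbour LBP codes at the given radius, as a normalized 256-bin histogram."""
    h, w = img.shape
    c = img[radius:h - radius, radius:w - radius]      # center pixels
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius), (0, radius),
               (radius, radius), (radius, 0), (radius, -radius), (0, -radius)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
        code |= (nb >= c).astype(np.int32) << bit      # 1 bit per neighbour comparison
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
h1 = lbp_histogram(img, radius=1)
h2 = lbp_histogram(img, radius=2)
concat = np.concatenate([h1, h2])        # concatenated MS-LBP: 2 x 256 = 512 bins
```

The joint MS-LBP would instead index one 256 x 256 histogram by the pair of codes (65,536 bins), which is exactly the dimensionality burden the paper's feature reduction targets.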


N. Asendorf and R. R. Nadakuditi, “Improved Estimation of Canonical Vectors in Canonical Correlation Analysis,” 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2015, pp. 1806-1810. doi: 10.1109/ACSSC.2015.7421463
Abstract: Canonical Correlation Analysis (CCA) is a multidimensional algorithm for two datasets that finds linear transformations, called canonical vectors, that maximize the correlation between the transformed datasets. However, in the low-sample high-dimension regime these canonical vector estimates are extremely inaccurate. We use insights from random matrix theory to propose a new algorithm that can reliably estimate canonical vectors in the sample deficient regime. Through numerical simulations we showcase that our new algorithm is robust to both limited training data and overestimating the dimension of the signal subspaces.
Keywords: data analysis; statistical analysis; vectors; CCA; canonical correlation analysis; canonical vectors estimation; numerical simulation; random matrix theory; sample deficient regime; Correlation; Covariance matrices; Data models; Matrices; Signal processing algorithms; Sociology (ID#: 16-9858)
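The classical CCA computation that this paper's estimator improves upon can be written as an SVD of the whitened cross-covariance. The sketch below is that textbook baseline (with a small ridge term for numerical safety); the synthetic shared-latent-signal data and function name are my own.

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-8):
    """Leading k canonical vectors via SVD of the whitened cross-covariance."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    iLx = np.linalg.inv(np.linalg.cholesky(Cxx))       # whitening transforms
    iLy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(iLx @ Cxy @ iLy.T)
    # map whitened directions back to canonical vectors in the original spaces
    return iLx.T @ U[:, :k], iLy.T @ Vt[:k].T, s[:k]

rng = np.random.default_rng(3)
z = rng.normal(size=(5000, 1))                         # shared latent signal
X = z @ np.ones((1, 4)) + rng.normal(size=(5000, 4))
Y = z @ np.ones((1, 3)) + rng.normal(size=(5000, 3))
a, b, corr = cca(X, Y, k=1)
```

With ample samples this recovers the shared component well; the paper's point is that in the low-sample, high-dimension regime these same vectors become unreliable and need correction.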


M. Ghamgui, D. Mehdi, O. Bachelier and F. Tadeo, “Lyapunov Theory for Continuous 2D Systems with Variable Delays: Application to Asymptotic and Exponential Stability,” 2015 4th International Conference on Systems and Control (ICSC), Sousse, 2015, pp. 367-371. doi: 10.1109/ICoSC.2015.7153308
Abstract: This paper deals with two-dimensional (2D) systems with variable delays. More precisely, conditions are developed to study the asymptotic and exponential stability of 2D Roesser-like models with variable independent delays affecting the two directions. Based on proper definitions of 2D asymptotic and exponential stability, sufficient conditions expressed as Linear Matrix Inequalities are derived from Lyapunov-Krasovskii functionals.
Keywords: Lyapunov methods; asymptotic stability; delay systems; linear matrix inequalities; multidimensional systems; 2D Roesser-like model; Lyapunov theory; Lyapunov-Krasovskii functional; continuous 2D system; exponential stability; linear matrix inequality; sufficient condition; two dimensional system; variable delay; variable independent delay; Asymptotic stability; Boundary conditions; Control theory; Delays; Signal processing; Stability analysis; 2D systems; Roesser model; variable delays (ID#: 16-9859)


J. Gao, L. Shen and D. Luo, “High Frequency HEMT Modeling Using Artificial Neural Network Technique,” 2015 IEEE MTT-S International Conference on Numerical Electromagnetic and Multiphysics Modeling and Optimization (NEMO), Ottawa, ON, 2015, pp. 1-3. doi: 10.1109/NEMO.2015.7415085
Abstract: Accurate high-frequency modeling of active devices, including microwave diodes and transistors, is absolutely necessary for computer-aided radio frequency integrated circuit (RFIC) design. This paper provides an overview of small-signal and large-signal modeling of field-effect transistors (FETs) based on the combination of conventional equivalent-circuit modeling and artificial neural network (ANN) modeling techniques. MLPs and space-mapped neuromodeling techniques have been used for building the small-signal model, and the adjoint technique as well as integration and differential techniques are used for building the large-signal model. Experimental results, which confirm the validity of the approaches, are also presented.
Keywords: electronic engineering computing; equivalent circuits; high electron mobility transistors; neural nets; semiconductor device models; ANN modeling techniques; FET; HEMT modeling; MLP; RFIC design; artificial neural network modeling techniques; artificial neural network technique; computer-aided radio frequency integrated circuit design; differential techniques; equivalent circuit modeling; field effect transistor; integration techniques microwave diodes; microwave transistors; multilayer perceptrons; space-mapped neuromodeling techniques; Artificial neural networks; Computational modeling; HEMTs; Integrated circuit modeling; Microwave circuits; Microwave transistors; ANN; device; modeling (ID#: 16-9860)


F. Xue, J. Hu and J. Wang, “An Analysis Model for Spatial Information in Multi-Scales,” 2015 8th International Conference on Signal Processing, Image Processing and Pattern Recognition (SIP), Jeju, 2015, pp. 30-33. doi: 10.1109/SIP.2015.15
Abstract: The Multi-Level Integrated analysis model of spatial information proposed in this article can offer more credible results than traditional regression models because of its ability to handle the hierarchical structure of data. In the model, a spatial process is treated as a process affected by internal and external effects of the data, and the relations among spatial heterogeneity, spatial dependence, and spatial scale are distinguished by two regressions, which helps disentangle the combined action of spatial dependence, spatial heterogeneity, and spatial scale effects in multidimensional spatial analysis.
Keywords: data analysis; regression analysis; spatial data structures; data structure; multidimensional spatial analysis; multilevel integrated analysis model; regression model; spatial dependence; spatial heterogeneity; spatial information; spatial scale effect; Analytical models; Correlation; Data models; Economics; Information science; Mathematical model; Urban areas; Multi-Level Integrated analysis model (ID#: 16-9861)


R. K. Miranda, J. P. C. L. da Costa, F. Roemer, A. L. F. de Almeida and G. Del Galdo, “Generalized Sidelobe Cancellers for Multidimensional Separable Arrays,” Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015 IEEE 6th International Workshop on, Cancun, 2015, pp. 193-196. doi: 10.1109/CAMSAP.2015.7383769
Abstract: The usage of antenna arrays has brought innumerable benefits to radio systems in recent decades. Arrays can have multidimensional structures that can be exploited to achieve superior performance and lower complexity. However, the literature has not yet explored all the advantages arising from these features. This paper uses tensors to provide a method to design efficient beamformers for multidimensional antenna arrays. In this work, the generalized sidelobe canceller (GSC) is extended to a multidimensional array to create the proposed R-dimensional GSC (R-D GSC). The proposed scheme has a lower computational complexity and, under certain conditions, exhibits an improved signal-to-interference-and-noise ratio (SINR).
Keywords: antenna arrays; computational complexity; generalized sidelobe cancellers; multidimensional separable arrays; signal to interference and noise ratio; Antenna arrays; Array signal processing; Arrays; Interference; Signal to noise ratio; Tensile stress; Transmission line matrix methods (ID#: 16-9862)
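The classical 1-D narrowband GSC that the paper generalizes splits the beamformer into a fixed distortionless branch and an adaptive branch behind a blocking matrix that nulls the look direction. The sketch below is that baseline with an LMS-adapted branch, on a simulated 8-element array with one strong interferer; array geometry, step size, and names are my own illustrative choices, not the R-D tensor formulation.

```python
import numpy as np

def gsc_weights(X, steer, mu=0.01):
    """Generalized sidelobe canceller: quiescent distortionless weights plus an
    LMS-adapted branch behind a blocking matrix orthogonal to the steering vector."""
    M = len(steer)
    wq = steer / (steer.conj() @ steer)            # quiescent branch (unit gain on steer)
    _, _, Vh = np.linalg.svd(steer[None, :].conj())
    B = Vh[1:].conj().T                            # M x (M-1) blocking matrix, B^H steer = 0
    wa = np.zeros(M - 1, dtype=complex)
    for x in X:                                    # snapshots, shape (N, M)
        d = wq.conj() @ x                          # main-branch output
        z = B.conj().T @ x                         # blocked (interference-only) data
        e = d - wa.conj() @ z
        wa += mu * z * np.conj(e)                  # complex LMS update
    return wq - B @ wa

M, N = 8, 4000
rng = np.random.default_rng(4)
steer = np.exp(1j * np.pi * np.arange(M) * np.sin(0.0))       # look direction: broadside
jam_dir = np.exp(1j * np.pi * np.arange(M) * np.sin(0.5))     # interferer off boresight
X = (rng.normal(size=(N, 1)) * jam_dir[None, :] * 3
     + 0.1 * (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))))
w = gsc_weights(X, steer)
```

The blocking matrix guarantees the look-direction gain stays exactly one while the adaptive branch drives down the interferer; the paper's contribution is doing this separably over the dimensions of an R-D array.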


X. Zhang, G. Wen and W. Dai, “Anomaly Detecting in Hyperspectral Imageries Based on Tensor Decomposition with Spectral and Spatial Partitioning,” 2015 8th International Congress on Image and Signal Processing (CISP), Shenyang, 2015, pp. 737-741. doi: 10.1109/CISP.2015.7407975
Abstract: Due to the multidimensional nature of the hyperspectral image (HSI), multi-way arrays (called tensors) are one possible representation for analyzing such data. In tensor algebra, the CANDECOMP/PARAFAC decomposition (CPD) is a popular tool which has been successfully applied to HSI data processing. However, on the one hand, CPD requires large memory for temporary variables; as a result, memory usually overflows when processing a real HSI of large size. On the other hand, no finite algorithm can so far reliably determine the rank of the tensor to be decomposed, and an inappropriate rank may over-fit or under-fit the information provided by the tensor. To deal with these problems, this paper proposes an improved CPD with spectral and spatial partitioning for HSI anomaly detection. First, the original HSI is divided into a set of smaller sub-tensors. Second, CPD is applied to each sub-tensor. Then, an anomaly detection algorithm is applied and the detection results are fused along the spectral direction. Experiments with a real HSI data set reveal that the proposed method outperforms both CPD without partitioning and the traditional RX anomaly detector in detection performance.
Keywords: algebra; hyperspectral imaging; CANDECOMP/PARAFAC decomposition; CPD; HSI anomaly detection; anomaly detection algorithm; finite algorithm; hyperspectral imageries; multiway arrays; spatial partitioning; spectral partitioning; tensor algebra; tensor decomposition; Algebra; Correlation; Hyperspectral imaging; Memory management; Spectral analysis; Tensile stress; CANDECOMP/PARAFAC decomposition (CPD); anomaly detection; hyperspectral image (HSI); spectral and spatial partitioning  (ID#: 16-9863)
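The CPD at the heart of the method is usually computed by alternating least squares (ALS). The sketch below is a plain 3-way CP-ALS on a small synthetic rank-2 tensor standing in for an HSI cube; it is the unpartitioned baseline the paper improves, and the function name and iteration count are my own.

```python
import numpy as np

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-R CANDECOMP/PARAFAC decomposition of a 3-way tensor via
    alternating least squares; returns the three factor matrices."""
    rng = np.random.default_rng(seed)
    A = [rng.normal(size=(T.shape[m], rank)) for m in range(3)]
    for _ in range(3 * 0 + n_iter):
        for m in range(3):
            others = [A[k] for k in range(3) if k != m]
            # Khatri-Rao product of the other two factors
            kr = (others[0][:, None, :] * others[1][None, :, :]).reshape(-1, rank)
            Tm = np.moveaxis(T, m, 0).reshape(T.shape[m], -1)   # mode-m unfolding
            G = (others[0].T @ others[0]) * (others[1].T @ others[1])
            A[m] = Tm @ kr @ np.linalg.pinv(G)                  # least-squares update
    return A

# rank-2 synthetic "HSI-like" tensor (rows x cols x bands)
rng = np.random.default_rng(5)
a, b, c = rng.normal(size=(10, 2)), rng.normal(size=(12, 2)), rng.normal(size=(6, 2))
T = np.einsum('ir,jr,kr->ijk', a, b, c)
A = cp_als(T, rank=2)
That = np.einsum('ir,jr,kr->ijk', *A)
err = np.linalg.norm(T - That) / np.linalg.norm(T)
```

The temporary Khatri-Rao matrix `kr` has as many rows as the product of two tensor dimensions, which is exactly the memory pressure that motivates decomposing smaller sub-tensors instead.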


R. Chen et al., “Research on Multi-Dimensional N-Back Task Induced EEG Variations,” 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, 2015, pp. 5163-5166. doi:10.1109/EMBC.2015.7319554
Abstract: In order to test the effectiveness of multi-dimensional N-back tasks for inducing deeper brain fatigue, we conducted a series of N*L-back experiments: 1*1-back, 1*2-back, 2*1-back and 2*2-back tasks. We analyzed and compared the behavioral results, EEG variations and mutual information among these four tasks. There was no significant difference in average EEG power or power spectrum entropy (PSE) among the tasks. However, the behavioral results of the N*2-back tasks showed significant differences compared to the traditional one-dimensional N-back task. Connectivity changes were observed with the addition of one more matching task in N-back. We suggest that multi-dimensional N-back tasks consume more brain resources and activate different brain areas. These results provide a basis for using multi-dimensional N-back tasks to induce deeper mental fatigue or exert more workload.
Keywords: bioelectric potentials; electroencephalography; entropy; medical signal processing; neurophysiology; average EEG power; brain areas; brain resources; deep brain fatigue; matching task; mental fatigue; multidimensional N-back task induced EEG variations; one dimensional N-back task; power spectrum entropy; Electroencephalography; Entropy; Fatigue; Image color analysis; Instruments; Mutual information; Yttrium (ID#: 16-9864)
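The power spectrum entropy (PSE) compared across tasks has a standard definition: normalize the power spectrum into a probability distribution and take its Shannon entropy. A minimal sketch (the toy tone/noise signals and sampling parameters are illustrative):

```python
import numpy as np

def power_spectrum_entropy(signal):
    """Power spectrum entropy (PSE): normalize the power spectrum into a
    probability distribution and return its Shannon entropy (bits). A
    flatter spectrum yields a higher PSE."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]                       # skip empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# A pure tone concentrates power in one bin (low PSE); white noise spreads
# power over all bins (high PSE).
t = np.arange(1024) / 256.0
tone = np.sin(2 * np.pi * 10 * t)
noise = np.random.default_rng(1).normal(size=1024)
print(power_spectrum_entropy(tone) < power_spectrum_entropy(noise))  # True
```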


Z. Yun, “The Study Of CDM-BSC-Based Data Mining Driven Fishbone Applied for Data Processing,” Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, Ningbo, 2015, pp. 1-5. doi: 10.1109/ICSPCC.2015.7338909
Abstract: Data Mining Driven Fishbone (DMDF), a new term, is an enhancement of the abstract conception of a multidimensional data flow, organized as a fishbone diagram and applied to data processing, to optimize the process and structure of data management and data mining. CDM-BSC (CRISP-DM applied with the Balanced Scorecard) is developed by combining the traditional CRISP-DM data processing methodology with the Balanced Scorecard used in performance measurement systems. An end-to-end DMDF diagram includes complex dataflows, different processing components and improvements for numerous aspects at multiple levels. Applying the Balanced Scorecard to CRISP-DM is a new methodology for improving the performance of information and data processing. CDM-BSC-based DMDF provides an integrated platform and a mixed methodology to support the whole data processing life cycle. Data preprocessing, data classification, association rule mining and prediction are the foundation and linkage of that life cycle. DMDF supports the combination of different mining components from the strategic and tactical levels to the abstract level, and then re-engineers the data mining process into an execution system to realize a reasonable architecture. CDM-BSC-based DMDF is a new direction for structuring large-scale information and data processing.
Keywords: cause-effect analysis; data mining; CDM-BSC; CRISP-DM; DMDF; data classification; data management; data mining driven fishbone; data processing methodology; mixed methodology; multidimensional-data flow; Cause effect analysis; Data mining; Data preprocessing; Metadata; CDM-BSC (CRISP-DM applied with Balance Scorecard); DMDF (data mining driven fishbone); Data mining; Data mining process; Data processing (ID#: 16-9865)


S. Handagala, A. Madanayake and N. Udayanga, “Design of a Millimeter-Wave Dish-Antenna Based 3-D IIR Radar Digital Velocity Filter,” Multidimensional (nD) Systems (nDS), 2015 IEEE 9th International Workshop on, Vila Real, 2015, pp. 1-6. doi: 10.1109/NDS.2015.7332657
Abstract: The enhancement of radar signatures corresponding to an object traveling at a particular velocity is proposed. The method employs a parabolic dish and focal plane array (FPA) processor together with a network-resonant multi-dimensional recursive digital velocity filter. An FPA-fed parabolic dish antenna creates multiple radio frequency (RF) beams. The RF beams can sense simulated moving objects that are illuminated using mm-wave (90 GHz) RF energy. A 3-D IIR digital velocity filter is applied to the simulated radar signals to enhance signatures that are moving in a direction of interest at a given speed, while significantly suppressing undesired interfering signals traveling at other velocities as well as additive Gaussian noise. A dish of diameter 0.5 m and focal length 30 cm, with an FPA of 4096 antenna elements (64×64) arranged in a dense square array, is simulated using an electromagnetic field simulator. The resulting electric field intensity profiles are processed to extract the signatures of interest. Simulation results show an average signal-to-interference improvement of 7 dB with a single interferer and 6 dB with multiple interferers. The proposed method exhibits an average signal-to-interference-and-noise ratio (SINR) improvement of 6 dB for an input SINR of -15 dB. All results are simulation-based; no fabrication has been attempted at this point.
Keywords: AWGN; IIR filters; electromagnetic fields; focal planes; millimetre wave antennas; object tracking; radar signal processing; radiofrequency interference; 3D IIR radar digital velocity filter; FPA processor; FPA-fed parabolic dish antenna; SINR; additive Gaussian noise; antenna elements; electric field intensity; electromagnetic field simulator; focal plane array processor; frequency 90 GHz; millimeter wave dish antenna; mm-wave RF energy; network resonant multidimensional recursive digital velocity filter; object traveling; radar signatures; radio frequency beams; signal to interference and noise ratio; Antenna arrays; Arrays; Interference; Radar; Radar antennas; Radio frequency (ID#: 16-9866)


H. D. Lu, B. Y. Chen and F. M. Guo, “The Model Optimized of Mini Packaging for Quantum Dots Photodetector Readout,” Electronics Packaging and iMAPS All Asia Conference (ICEP-IACC), 2015 International Conference on, Kyoto, 2015, pp. 581-585. doi: 10.1109/ICEP-IAAC.2015.7111081
Abstract: This paper presents research on readout-model optimization and miniature packaging for a quantum dot photodetector array. Genetic algorithms are used to model the quantum dot photodetector for accurate readout of the photoelectric response signal. Three different equivalent-circuit models were compared with each other and simulated with Cadence IC design software. We developed a CTIA readout structure for the quantum dot photodetector array, and the readout noise for different substrate materials was simulated in ADS to minimize noise and interference. Two kinds of silicon interposer, namely via-with-one-line and via-with-four-line, were compared, demonstrating that the via-with-four-line silicon interposer outperforms the via-with-one-line one. Other interposers, such as PCB and ceramic interposers, further reduce crosstalk and suppress noise. We also designed a data acquisition and processing unit that provides a Wi-Fi interface for communicating with PC software to complete tasks such as data acquisition, digital filtering, spectral display, network communication and human-computer interaction. Owing to the high sensitivity of the quantum dot photodetector, the integrated system achieves a short integration time (10 µs), lower noise, better resistance to overflow and a large dynamic range.
Keywords: data acquisition; elemental semiconductors; packaging; photodetectors; quantum dots; readout electronics; silicon; wireless LAN; PC software; PCB; Si; Wi-Fi interface; ceramic interposers; crosstalk; data acquisition; digital filtering; equivalent circuit model; genetic algorithms; human-computer interaction; mini packaging; network communication; optimized model; quantum dots photodetector array; readout model; readout noise; readout photoelectric response signal; silicon interposer; spectral display; substrate materials; via-with-four-line silicon interposer; via-with-one-line silicon interposer; Arrays; Integrated circuit modeling; Noise; Packaging; Photodetectors; Quantum dots; Silicon;  mini packaging; miniature spectrometer; optimization model; photodetector (ID#: 16-9867)


J. Matamoros, M. Calvo-Fullana and C. Antón-Haro, “On the Impact of Correlated Sampling Processes in WSNs with Energy-Neutral Operation,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 258-263. doi: 10.1109/ICC.2015.7248331
Abstract: In this paper, we consider a communication scenario where multiple EH sensor nodes collect correlated measurements of an underlying random field. The nodes operate in an energy-neutral manner (i.e. energy is used as soon as it is harvested) and, hence, the energy-harvesting and sampling processes at the sensor nodes become intertwined, random and spatially correlated. Under some mild assumptions, we derive the multidimensional linear filter that minimizes the mean square error of the reconstructed measurements at the Fusion Center (FC). We also analyze the impact of correlated and random sampling processes on the resulting distortion and, to gain some insight, we particularize the analysis to the case of fully correlated spatial fields with an asymptotically large number of sensor nodes.
Keywords: correlation methods; mean square error methods; multidimensional digital filters; sensor fusion; signal sampling; wireless sensor networks; WSN; communication scenario; correlated sampling processes; energy-harvesting; energy-neutral operation; fusion center; intertwined correlation; mean square error minimization; multidimensional linear filter; multiple EH sensor nodes; random correlation; random field; random sampling processes; resulting distortion; spatially correlation; Batteries; Correlation; Distortion; Energy harvesting; Noise; Numerical models; Wireless sensor networks (ID#: 16-9868)
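The multidimensional linear filter minimizing mean square error takes the textbook LMMSE form; the paper derives it under its own correlated, energy-neutral sampling model. As a generic illustration under an assumed y = x + w observation model (the exponential correlation model and the parameter values are illustrative, not the paper's):

```python
import numpy as np

def lmmse_filter(c_xy, c_yy):
    """Linear MMSE filter W = C_xy C_yy^{-1}; the estimate is x_hat = W @ y.
    This is the textbook multidimensional linear filter minimizing mean
    square error for zero-mean jointly distributed (x, y)."""
    return c_xy @ np.linalg.inv(c_yy)

# Toy spatially correlated field x observed as y = x + w at n sensors.
n, noise_var = 6, 0.1
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
c_xx = np.exp(-0.5 * dist)               # exponential spatial correlation
c_yy = c_xx + noise_var * np.eye(n)      # E[y y^T] for y = x + w
W = lmmse_filter(c_xx, c_yy)             # C_xy = C_xx under this model
err_cov = c_xx - W @ c_xx                # posterior error covariance
print(np.trace(err_cov) < n * noise_var) # True: filtering beats raw measurements
```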


A. Al-nasheri et al., “Voice Pathology Detection with MDVP Parameters Using Arabic Voice Pathology Database,” Information Technology: Towards New Smart World (NSITNSW), 2015 5th National Symposium on, Riyadh, 2015, pp. 1-5. doi: 10.1109/NSITNSW.2015.7176431
Abstract: This paper investigates the use of Multi-Dimensional Voice Program (MDVP) parameters to automatically detect voice pathology in the Arabic voice pathology database (AVPD). MDVP parameters are very popular among physicians and clinicians for detecting voice pathology; however, MDVP is commercial software. AVPD is a newly developed speech database designed to suit a wide range of experiments in automatic voice pathology detection, classification, and automatic speech recognition. This paper is a first step toward evaluating MDVP parameters on AVPD using the sustained vowel /a/. The experimental results demonstrate that some of the acoustic features show an excellent ability to discriminate between normal and pathological voices. The overall best accuracy is 81.33%, obtained with an SVM classifier.
Keywords: medical signal detection; signal classification; speech recognition; support vector machines; AVPD; Arabic voice pathology database; MDVP parameters; SVM classifier; acoustic features; automatic speech recognition; commercial software; multidimensional voice program; speech database; support vector machine; voice pathology detection; Accuracy; Acoustics; Databases; Pathology; Speech; Speech recognition; Support vector machines; MDVP; MEEI; SVM (ID#: 16-9869)
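MDVP reports a battery of perturbation measures; jitter is a representative one. The exact MDVP formulas are part of the commercial package, so the sketch below uses a common textbook definition of local jitter over a sequence of pitch periods (the toy period values are illustrative):

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, divided by the mean period -- one common perturbation
    measure of the kind MDVP reports."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

# A perfectly steady voice has zero jitter; cycle-to-cycle perturbation,
# typical of pathological voices, raises it.
steady = np.full(50, 8.0)  # pitch periods in ms, i.e. a constant F0 of 125 Hz
perturbed = steady + np.random.default_rng(3).normal(0.0, 0.4, 50)
print(local_jitter(steady))                            # 0.0
print(local_jitter(perturbed) > local_jitter(steady))  # True
```

Feature vectors of such measures are what the SVM classifier in the paper is trained on.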


B. Liao and S. C. Chan, “A Simple Method for DOA Estimation in the Presence of Unknown Nonuniform Noise,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 2789-2793. doi: 10.1109/ICASSP.2015.7178479
Abstract: When considering the problem of direction-of-arrival (DOA) estimation, uniform noise is often assumed and, hence, the corresponding noise covariance matrix is diagonal with identical diagonal entries. However, this does not always hold, since the noise is nonuniform in certain applications and a model with an arbitrary diagonal noise covariance matrix should be adopted. To this end, a simple approach to handling the unknown nonuniform noise problem is proposed. In particular, an iterative procedure is developed to determine the signal subspace and the noise covariance matrix. As a consequence, existing subspace-based DOA estimators such as MUSIC can be applied. Furthermore, the proposed method converges within very few iterations, in each of which closed-form estimates of the signal subspace and noise covariance matrix are obtained. Hence, it is much more computationally attractive than conventional methods that rely on a multi-dimensional search. It is shown that the proposed method enjoys good performance, simplicity and low computational cost, which are desirable in practical applications.
Keywords: covariance matrices; direction-of-arrival estimation; iterative methods; search problems; DOA estimation; arbitrary diagonal noise covariance matrix model; direction-of-arrival estimation; iterative procedure; multidimensional search; signal subspace; unknown nonuniform noise handling problem; Direction-of-arrival estimation; Direction-of-arrival (DOA) estimation; nonuniform noise; subspace estimation (ID#: 16-9870)
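MUSIC, which the proposed procedure makes applicable once the noise covariance is handled, can be sketched for the simple uniform-noise case. A minimal numpy version for a half-wavelength uniform linear array (the array size, snapshot count and source angle are illustrative):

```python
import numpy as np

def music_spectrum(R, n_sources, n_sensors, angles_deg, spacing=0.5):
    """MUSIC pseudospectrum for a uniform linear array: eigendecompose the
    covariance, take the noise subspace (smallest eigenvalues), and score
    each candidate angle by the inverse of its projection onto it."""
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
    En = eigvecs[:, : n_sensors - n_sources]        # noise subspace
    k = np.arange(n_sensors)
    spec = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * spacing * k * np.sin(theta))  # steering vector
        proj = En.conj().T @ a
        spec.append(1.0 / np.real(proj.conj() @ proj))
    return np.array(spec)

# Simulate one source at 20 degrees on an 8-sensor ULA with uniform noise.
rng = np.random.default_rng(4)
m, snaps, theta0 = 8, 200, 20.0
a0 = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(np.deg2rad(theta0)))
s = (rng.normal(size=snaps) + 1j * rng.normal(size=snaps)) / np.sqrt(2)
w = (rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps))) * 0.1
X = np.outer(a0, s) + w
R = X @ X.conj().T / snaps
angles = np.arange(-90, 90.5, 0.5)
print(angles[music_spectrum(R, 1, m, angles).argmax()])  # peaks near 20.0
```

The paper replaces the uniform-noise assumption behind this eigendecomposition with an iteratively estimated diagonal noise covariance.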


H. Wang, Q. Song, T. Ma, H. Cao and Y. Sun, “Study on Brain-Computer Interface Based on Mental Tasks,” Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, Shenyang, 2015, pp. 841-845. doi: 10.1109/CYBER.2015.7288053
Abstract: In this paper, a novel method is proposed that realizes a brain-computer interface by distinguishing two different imaginary tasks, relaxation-meditation and tension-imagination, based on the electroencephalogram (EEG) signal. When subjects performed the relaxation-meditation or tension-imagination task, their EEG signals were recorded from the PZ electrode in the central parieto-occipital region by a digital EEG device. The Hilbert time-frequency amplitude spectrum was computed, and the statistical properties of the amplitude within different time-frequency bands were selected as the characteristic vector set. Feature selection was then carried out using the Fisher distance criterion: the leading elements with the largest Fisher indices form a multidimensional feature vector, which is finally input to a Fisher classifier, thereby realizing the brain-computer interface. Experimental results from 15 volunteers showed an average classification accuracy of 90.3%, with a highest of 95%. Since only one electrode is used, with a suitable coding scheme this brain-computer interface technology could be readily applied to robot control.
Keywords: brain-computer interfaces; electroencephalography; medical signal processing; statistical analysis; EEG signal; Fisher distance criterion; Fisher index; Hilbert time-frequency amplitude spectrum; PZ electrode; brain computer interface; brain-computer interface technology; central parieto-occipital region; characteristic vector set; different time-frequency bands; digital EEG device; electroencephalogram signal; mental tasks; multidimensional feature vector; relaxation meditation; robot control; statistical properties; tension imagination; Accuracy; Brain-computer interfaces; Electrodes; Electroencephalography; Feature extraction; Time-frequency analysis; Transforms; EEG-based brain-computer interface (BCI); Hilbert-Huang transform; feature extraction; mental task of relaxation-meditation; mental task of tension-imagination; pattern classification (ID#: 16-9871)


G. Dickins, Hanchi Chen and Wen Zhang, “Soundfield Control for Consumer Device Testing,” Signal Processing and Communication Systems (ICSPCS), 2015 9th International Conference on, Cairns, QLD, 2015, pp. 1-5. doi: 10.1109/ICSPCS.2015.7391774
Abstract: This paper covers the theory, measurement and analysis of a constructed reference system to capture and acoustically reconstruct a spatial soundfield for device testing. Using a rigid sphere microphone array, a framework is presented for numeric and visual representation of the multidimensional system performance. This is used to compare the measured acoustic spatial recreation of a practical 30-channel dodecahedral speaker array to that of a theoretically optimal third-order system. We consider the excess noise gain introduced to compensate for the imperfect realization. Results show the theoretical impact of the pragmatic dodecahedral speaker geometry is similar in magnitude to the impact of acoustic considerations such as scattering and the use of real speakers. Whilst the speaker arrangement is important in the context of system design, it is not the most critical factor in a cost-constrained design. This work contributes toward bridging the gap between academic soundfield theory and the research challenges of a present high-impact application.
Keywords: acoustic field; loudspeakers; microphone arrays; telecommunication equipment testing; test equipment; channel dodecahedral speaker array; consumer device testing; multidimensional system performance; pragmatic dodecahedral speaker geometry; rigid sphere microphone array; soundfield control; spatial soundfield acoustic reconstruction; spatial soundfield capture; Arrays; Couplings; Geometry; Harmonic analysis; Loudspeakers; Microphones; Testing; acoustic testing; microphone array; sound field; spatial sound; speaker array (ID#: 16-9872)


M. Altuve, E. Severeyn and S. Wong, “Unsupervised Subjects Classification Using Insulin and Glucose Data for Insulin Resistance Assessment,” Signal Processing, Images and Computer Vision (STSIVA), 2015 20th Symposium on, Bogota, 2015, pp. 1-7. doi: 10.1109/STSIVA.2015.7330444
Abstract: In this paper, the K-means clustering algorithm is employed to perform an unsupervised classification of subjects based on unidimensional observations (the HOMA-IR and Matsuda indexes separately) and multidimensional observations (insulin and glucose samples obtained from the oral glucose tolerance test). The goal is to explore whether the clusters obtained could be used to predict or diagnose insulin resistance, or whether they are related to the profiles of the population under study: metabolic syndrome patients, marathoners and sedentary subjects. Using two and three clusters, three classification experiments were carried out: (i) using the HOMA-IR index as unidimensional observations, (ii) using the Matsuda index as unidimensional observations, and (iii) using five insulin and five glucose samples as multidimensional observations. The results show that with the HOMA-IR index the clusters are related to insulin resistance, and that when multidimensional observations are used in the classification process the clusters could be used to predict insulin resistance or other related diseases.
Keywords: diseases; medical computing; pattern classification; pattern clustering; unsupervised learning; HOMA-IR index; K-means clustering algorithm; Matsuda index; disease; glucose data; insulin data; insulin resistance assessment; unsupervised subject classification; Clustering algorithms; Diseases; Immune system; Indexes; Insulin; Sugar; Unsupervised learning (ID#: 16-9873)
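K-means itself is simple enough to sketch. A minimal numpy version with farthest-point initialization follows; the two synthetic "profiles" are illustrative stand-ins for insulin/glucose observations, not the study's data:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain K-means with farthest-point initialization: alternate
    nearest-centroid assignment and centroid re-estimation until the
    assignments stop changing."""
    idx = [0]
    for _ in range(1, k):  # pick each new seed farthest from the chosen ones
        d = np.min(np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2), axis=1)
        idx.append(int(d.argmax()))
    centroids = X[idx].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical 2-D observations standing in for (insulin, glucose) samples.
rng = np.random.default_rng(5)
low = rng.normal([60, 90], 5, size=(30, 2))     # e.g. an insulin-sensitive profile
high = rng.normal([150, 180], 5, size=(30, 2))  # e.g. an insulin-resistant profile
X = np.vstack([low, high])
labels, _ = kmeans(X, 2)
print(len(set(labels[:30])), len(set(labels[30:])))  # 1 1: each group in one cluster
```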


R. Feld and E. C. Slob, “2D GPR Monitoring Without a Source by Interferometry in a 3D World,” Advanced Ground Penetrating Radar (IWAGPR), 2015 8th International Workshop on, Florence, 2015, pp. 1-4. doi: 10.1109/IWAGPR.2015.7292612
Abstract: Creating virtual sources at locations where physical receivers have measured a response is known as seismic interferometry. The method does not use any information about the actual source's location. The source can be mobile phone radiation, already available in the air, as long as this background radiation can be represented by uncorrelated noise sources. Interferometry by multi-dimensional deconvolution (MDD) "divides the common path out of the data", yielding both amplitude and phase information, whereas interferometry by cross-correlation (CC) uses time reversal to retrieve phase information only. CC works fine for low-dissipative media. A finite difference time-domain solver can create 3D line-array data for receiving antennas on a surface whose subsurface is homogeneous perpendicular to the receiver array, without anything being transmitted other than background radiation. By applying the MDD and CC techniques, the 2D GPR signal can be retrieved as if there were a transmitting antenna at a receiving antenna's position. Numerical results show that both MDD and CC work well.
Keywords: array signal processing; finite difference time-domain analysis; ground penetrating radar; radar interferometry; radar signal processing; 2D GPR monitoring; 3D world; MDD; finite difference time-domain solver; low-dissipative media; mobile phone radiation; multidimensional deconvolution; receiving antennas; seismic interferometry; transmitting antenna; Deconvolution; Ground penetrating radar; Interferometry; Noise; Receiving antennas; Three-dimensional displays; GPR; cross-correlation; multi-dimensional deconvolution; passive interferometry (ID#: 16-9874)


C. L. Liu and P. P. Vaidyanathan, “Tensor MUSIC in Multidimensional Sparse Arrays,” 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2015, pp. 1783-1787. doi: 10.1109/ACSSC.2015.7421458
Abstract: Tensor-based MUSIC algorithms have been successfully applied to parameter estimation in array processing. In this paper, we apply them to sparse arrays, such as nested arrays and coprime arrays, which are known to boost the degrees of freedom to O(N2) given O(N) sensors. We consider two tensor decomposition methods, CANDECOMP/PARAFAC (CP) and high-order singular value decomposition (HOSVD), to derive novel tensor MUSIC spectra for sparse arrays. It is demonstrated that the tensor MUSIC spectrum via HOSVD suffers from cross-term issues, while the tensor MUSIC spectrum via CP identifies sources unambiguously, even for high-dimensional tensors.
Keywords: array signal processing; parameter estimation; tensors; array processing; coprime arrays; high-order singular value decomposition; multidimensional sparse arrays; nested arrays; parameter estimation; tensor MUSIC; Covariance matrices; Multiple signal classification; Sensor arrays; Smoothing methods; Tensile stress; CANDE-COMP/PARAFAC (CP); MUSIC algorithm; Sparse arrays; high-order singular value decomposition (HOSVD) (ID#: 16-9875)


S. Miah et al., “Design of Multidimensional Sensor Fusion System for Road Pavement Inspection,” 2015 International Conference on Systems, Signals and Image Processing (IWSSIP), London, 2015, pp. 304-308. doi: 10.1109/IWSSIP.2015.7314236
Abstract: This paper presents a systematic approach to decision-level sensor fusion for a road pavement inspection system developed under FP7 RPB HealTec ("Road Pavements & Bridge Deck Health Monitoring/Early Warning Using Advanced Inspection Technologies"). The paper focuses on the design of the post-processing sensor fusion system and outlines methods to process and fuse sensor data such as GPR, IRT, ACU and HDV for multidimensional assessment of road pavement condition. In addition, the paper illustrates a visualization technique for mapping detected defects onto the road surface and a GIS map.
Keywords: ground penetrating radar; infrared imaging; radar detection; roads; sensor fusion; ultrasonics; ACU; FP7 RPB HealTec; GIS map; GPR; HDV; IRT; decision level sensor fusion; detected defects; multidimensional assessment; post processing sensor fusion system; road pavement inspection system; road pavement quality condition; road surface; Europe; Feature extraction; Ground penetrating radar; Inspection; Roads; Sensor fusion; Surface treatment; Air-Coupled Ultrasound; Ground Penetrating Radar; Infrared Thermography; Non-destructive testing (ID#: 16-9876)


T. Janvars and P. Farkaš, “Hard Decision Decoding of Single Parity Turbo Product Code with N-Level Quantization,” Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, Prague, 2015, pp. 1-6. doi: 10.1109/TSP.2015.7296433
Abstract: In this paper we propose an iterative hard decision decoding algorithm with N-level quantization for multidimensional turbo product codes composed of single parity codes. The idea is to adjust the original iterative HIHO decoding algorithm so as to keep the same decoder complexity while approaching SISO decoder performance. Performance results for various single parity turbo product codes and quantization levels in an additive white Gaussian noise channel are presented.
Keywords: AWGN channels; decoding; parity check codes; product codes; quantisation (signal); turbo codes; N-level quantization; additive white Gaussian noise channel; iterative hard decision decoding algorithm; multidimensional turbo product codes; single parity codes; single parity turbo product code; Bit error rate; Decoding; Encoding; Iterative decoding; Product codes; Quantization (signal); HIHO decoder; N-level quantization decision; code component; iterative decoding; performance; single parity turbo product code (ID#: 16-9877)
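The paper's contribution is the N-level quantization added to iterative HIHO decoding; the hard-decision core for a single-parity product code is classic and can be sketched in two dimensions. A single bit error sits at the intersection of the one failing row check and the one failing column check (the message values below are illustrative):

```python
import numpy as np

def encode_spc_2d(data):
    """Append an even-parity bit to every row, then to every column."""
    rows = np.hstack([data, data.sum(axis=1, keepdims=True) % 2])
    return np.vstack([rows, rows.sum(axis=0, keepdims=True) % 2])

def decode_spc_2d(word):
    """One hard-decision step: a single bit error lies at the intersection
    of the (unique) failing row check and failing column check, so flip it."""
    word = word.copy()
    row_fail = np.flatnonzero(word.sum(axis=1) % 2)
    col_fail = np.flatnonzero(word.sum(axis=0) % 2)
    if len(row_fail) == 1 and len(col_fail) == 1:
        word[row_fail[0], col_fail[0]] ^= 1
    return word

msg = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
cw = encode_spc_2d(msg)
noisy = cw.copy()
noisy[1, 2] ^= 1                                  # inject one bit error
print(np.array_equal(decode_spc_2d(noisy), cw))   # True: the error is corrected
```

The multidimensional codes in the paper extend this row/column checking to more axes and iterate across them.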


S. Ling and Q. Yunfeng, “Optimization of the Distributed K-Means Clustering Algorithm Based on Set Pair Analysis,” 2015 8th International Congress on Image and Signal Processing (CISP), Shenyang, 2015, pp. 1593-1598. doi: 10.1109/CISP.2015.7408139
Abstract: The distributed K-means clustering algorithm, which targets multidimensional data, has been widely used. However, the current distributed K-means clustering algorithm uses the Euclidean distance as the similarity measure for multidimensional data, which makes it partition the data set rather rigidly. To address this problem, we present a distributed K-means clustering algorithm (SPAB-DKMC) based on the method of set pair analysis. Experiments on the Hadoop distributed platform show that SPAB-DKMC reduces the number of iterations and improves the efficiency of the distributed K-means clustering algorithm.
Keywords: data handling; optimisation; parallel programming; pattern clustering; Euclidean distance; Hadoop distributed platform; SPAB-DKMC; distributed K-means cluster algorithm; iterative method; set pair analysis; Algorithm design and analysis; Classification algorithms; Clustering algorithms; Convergence; Distributed databases; Euclidean distance; Signal processing algorithms; K-means clustering; MapReduce model; distributed algorithm; similarity degree computation (ID#: 16-9878)


H. S. Shekhawat and S. Weiland, “A Novel Computational Scheme for Low Multi-Linear Rank Approximations of Tensors,” Control Conference (ECC), 2015 European, Linz, 2015, pp. 3003-3008. doi: 10.1109/ECC.2015.7330994
Abstract: Multi-linear functions, generally known as tensors, provide a natural object of study in multi-dimensional signal and system analysis. Tensor approximation has various applications in signal processing and system theory. In this paper, we show the local convergence of a numerical method for low multi-linear rank tensor approximation that is based on Jacobi iterations.
Keywords: Jacobian matrices; approximation theory; tensors; Jacobi iteration; multidimensional signal; multilinear functional; multilinear rank approximation; tensor approximation; Approximation methods; Convergence; Eigenvalues and eigenfunctions; Jacobian matrices; Standards; Tensile stress; Jacobi iterations; Tensor decompositions; singular value decompositions (ID#: 16-9879)


J. A. Hogan and J. D. Lakey, “Wavelet Frames Generated by Bandpass Prolate Functions,” Sampling Theory and Applications (SampTA), 2015 International Conference on, Washington, DC, 2015, pp. 120-123. doi: 10.1109/SAMPTA.2015.7148863
Abstract: We refer to eigenfunctions of the kernel corresponding to truncation in a time interval followed by truncation in a frequency band as bandpass prolates (BPPs). We prove frame bounds for certain families of shifts of bandpass prolates, and we numerically construct dual frames for finite-dimensional analogues. In the continuous case, the corresponding families produce wavelet frames for the space of square-integrable functions.
Keywords: eigenvalues and eigenfunctions; multidimensional systems; signal processing; wavelet transforms; BPP; bandpass prolate functions; bandpass prolates; eigenfunctions; finite dimensional analogues; frequency band; square-integrable functions; wavelet frames; Baseband; Discrete Fourier transforms; Eigenvalues and eigenfunctions; Generators; Kernel; Redundancy; Wave functions (ID#: 16-9880)


K. A. Senthildevi and E. Chandra, “Keyword Spotting System for Tamil Isolated Words Using Multidimensional MFCC and DTW Algorithm,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 0550-0554. doi: 10.1109/ICCSP.2015.7322545
Abstract: Audio mining is a speaker-independent speech processing technique related to data mining, and keyword spotting plays an important role in it. Keyword spotting is the retrieval of all instances of a given keyword in spoken utterances. It is well suited to data mining tasks that process large amounts of speech, such as telephone routing, and to audio document indexing. Feature extraction is the first step of all speech processing tasks. This paper presents an approach to keyword spotting in isolated Tamil utterances using multidimensional Mel Frequency Cepstral Coefficient (MFCC) feature vectors and the DTW algorithm. The accuracy of keyword spotting is measured with 12D, 26D and 39D MFCC feature vectors for month names in the Tamil language, and the performances of the multidimensional MFCCs are compared. The code is developed in the MATLAB environment and performs the identification satisfactorily.
Keywords: data mining; feature extraction; indexing; speaker recognition; 12D MFCC feature vector; 26D MFCC feature vector; 39D MFCC feature vector; DTW algorithm; MATLAB environment; Tamil isolated word; audio document indexing; audio mining; isolated Tamil utterance; keyword spotting system; multidimensional MFCC; multidimensional Mel frequency cepstral coefficient feature vector; speaker independent speech processing technique; Accuracy; Algorithm design and analysis; Frequency conversion; Indexes; Mel frequency cepstral coefficient; Pattern matching; Audio mining; Keyword spotting; MFCC Feature vectors; Speech processing (ID#: 16-9881)
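The DTW matching step over MFCC-like frame sequences can be sketched directly. A minimal numpy version of the classic dynamic program follows; the sinusoidal "features" are illustrative stand-ins for MFCC frames:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences of
    shape (T, d), e.g. per-frame MFCC vectors: the classic O(Ta*Tb)
    dynamic program over local Euclidean frame distances."""
    ta, tb = len(a), len(b)
    D = np.full((ta + 1, tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, ta + 1):
        for j in range(1, tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[ta, tb])

# A time-stretched copy of a template stays close under DTW, while a
# different pattern of the same length does not.
template = np.sin(2 * np.pi * np.linspace(0, 1, 40))[:, None]
stretched = np.sin(2 * np.pi * np.linspace(0, 1, 60))[:, None]
other = np.cos(2 * np.pi * np.linspace(0, 1, 60))[:, None]
print(dtw_distance(template, stretched) < dtw_distance(template, other))  # True
```

This time-warping tolerance is what lets a keyword template match utterances spoken at different speaking rates.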


N. Udayanga, A. Madanayake and C. Wijenayake, “FPGA-Based Network-Resonance Applebaum Adaptive Arrays for Directional Spectrum Sensing,” 2015 IEEE 58th International Midwest Symposium on Circuits and Systems (MWSCAS), Fort Collins, CO, 2015, pp. 1-4. doi: 10.1109/MWSCAS.2015.7282165
Abstract: Cognitive radio (CR) depends on the accurate detection of the frequency, modulation, and direction of radio sources, leading to spatio-temporal directional spectrum sensing. False detections due to high levels of noise and interference may adversely impact the CR's performance. To address this problem, a novel system architecture is proposed that increases the accuracy of directional spectrum sensing in situations with low signal-to-noise ratio (SNR). This work combines adaptive arrays, multidimensional filter theory and cyclostationary feature detection. A linear-array Applebaum beamformer is employed in conjunction with a two-dimensional (2-D) planar-resonant beam filter to perform highly directional receive-mode wideband beamforming with improved spatial selectivity. A Xilinx Virtex-6 based field programmable gate array (FPGA) prototype of the improved beamforming front-end verifies a clock frequency of 100.9 MHz. The proposed network-resonant Applebaum array provides 6 dB, 5.5 dB and 5 dB noise suppression capability, reflected in the spectral correlation function, for input SNRs of -20 dB, -25 dB, and -30 dB, respectively, for an RF beam direction 50° from array broadside.
Keywords: array signal processing; cognitive radio; correlation methods; feature extraction; field programmable gate arrays; filtering theory; interference suppression; modulation; radio spectrum management; radiofrequency interference; signal detection; 2D planar-resonant beam filter; FPGA; RF beam direction; SNR; Xilinx Virtex-6; clock frequency; cyclostationary feature detection; directional receive mode wideband beamforming; field programmable gate array; frequency 100.9 MHz; linear array Applebaum beamformer; multidimensional filter theory; network-resonance Applebaum adaptive arrays; network-resonant Applebaum array; noise suppression capability; radio sources; signal to noise ratio; spatial selectivity; spatiotemporal directional spectrum sensing; spectral correlation function; Adaptive arrays; Feature extraction; Frequency modulation; Interference; Sensors; Signal to noise ratio (ID#: 16-9882)


Y. Zhu, A. Jiang, H. K. Kwan and K. He, “Distributed Sensor Network Localization Using Combination and Diffusion Scheme,” 2015 IEEE International Conference on Digital Signal Processing (DSP), Singapore, 2015, pp. 1156-1160. doi: 10.1109/ICDSP.2015.7252061
Abstract: A distributed sensor network localization algorithm is presented in this paper. During the localization procedure, each sensor estimates its own coordinate using a local multidimensional scaling (MDS) algorithm. In contrast to the classical MDS algorithm, which adopts centralized processing, all the neighbors of each sensor are treated as anchor nodes in the local MDS algorithm. Furthermore, each sensor's coordinate can be estimated by its neighbors based on their respective knowledge. These local estimates are then collected by the corresponding sensors and used in a combination step to finally determine the sensors' coordinates. In this way, each sensor's knowledge can be diffused and shared across the network. Simulation results show that, compared to the classical MDS algorithm, the proposed algorithm is more robust to measurement noise. Moreover, when sensors are sparsely connected, the distributed MDS algorithm performs better than the centralized version of the MDS algorithm.
Keywords: diffusion; wireless sensor networks; anchor nodes; centralized processing; combination scheme; coordinate estimation; diffusion scheme; distributed MDS algorithm; distributed sensor network localization algorithm; local MDS algorithm; local multidimensional scaling algorithm; measurement noise; sensor coordinate determination; Ad hoc networks; Distance measurement; Electronic mail; Nickel; Noise; Optimization; Wireless sensor networks; Diffusion scheme; distributed localization; multidimensional scaling (MDS) algorithm; sensor network (ID#: 16-9883)
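The classical (centralized) MDS building block the paper compares against can be sketched in a few lines: recover coordinates, up to rotation and translation, from a matrix of pairwise distances by double centering and an eigendecomposition. This is an illustrative NumPy sketch, not the authors' code, and the four sensor positions are invented.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover coordinates (up to rotation/translation) from a matrix of
    pairwise Euclidean distances D via double centering (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # keep the top-`dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy check: take 4 known sensor positions, rebuild them from distances only.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D)
D_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print(np.allclose(D, D_hat))  # True: all pairwise distances are preserved
```

The distributed variant in the paper runs this kind of fit locally, treating each sensor's neighbors as anchors, and then diffuses the local estimates through the combination step.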


W. W. Wang and F. M. Guo, “Simulation of InAlAs/InGaAs/InAs Quantum Dots — Quantum Well Near-Infrared Detector,” 2015 International Conference on Numerical Simulation of Optoelectronic Devices (NUSOD), Taipei, 2015, pp. 101-102. doi: 10.1109/NUSOD.2015.7292842
Abstract: We have systematically studied the InAlAs/InGaAs/InAs quantum dots - quantum well structure on an InP substrate by simulation and analysis with the Crosslight APSYS package. The signal-to-dark-current ratio (S/D) has its best working points at 3.5 V and -1.3 V at 300 K, and the photocurrent spectrum based on the quantum dot in the well extends up to 1.70 μm. Simulation results also include the InGaAs EL spectrum, dark current, and photo-responsivity.
Keywords: III-V semiconductors; aluminium compounds; dark conductivity; electroluminescence; gallium arsenide; indium compounds; infrared detectors; photoconductivity; photodetectors; quantum well devices; semiconductor quantum dots; semiconductor quantum wells; Crosslight Apsys package; InAlAs-InGaAs-InAs; InP; InP substrate; dark current; electroluminescence spectrum; photocurrent spectrum; photoresponsivity; quantum dot-quantum well near-infrared detector; temperature 300 K; voltage -1.3 V; voltage 3.5 V; Atmospheric modeling; Indium gallium arsenide; Photoconductivity; Quantum dots; Resonant tunneling devices; Signal to noise ratio (ID#: 16-9884)


H. Feng and B. Z. Guo, “Distributed Disturbance Estimator and Application to Stabilization of Multi-Dimensional Kirchhoff Equation,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 2501-2506. doi: 10.1109/CDC.2015.7402584
Abstract: In this paper, we present a linear disturbance estimator with time-varying gain to extract the real signal from a corrupted velocity signal. The approach comes from active disturbance rejection control. A variant form of the estimator can also serve as a tracking differentiator. The estimator itself is relatively independent of the controlled plant. The result is applied to the stabilization of a multi-dimensional Kirchhoff equation as a demonstration.
Keywords: active disturbance rejection control; parameter estimation; signal processing; ADRC; distributed disturbance estimator; linear disturbance estimator; multidimensional Kirchhoff equation; stabilization; time-varying gain; velocity signal extraction; Convergence; Distributed parameter systems; Hilbert space; Numerical simulation; Observers; Robust control; Uncertainty
(ID#: 16-9885)


S. K. Mahto, A. Choubey and S. Suman, “Linear Array Synthesis with Minimum Side Lobe Level and Null Control Using Wind Driven Optimization,” Signal Processing And Communication Engineering Systems (SPACES), 2015 International Conference on, Guntur, 2015, pp. 191-195. doi: 10.1109/SPACES.2015.7058246
Abstract: This paper presents the synthesis of an unequally spaced linear array antenna with minimum sidelobe suppression, desired beamwidth, and null control using the wind driven optimization (WDO) algorithm. WDO is a nature-inspired, population-based iterative heuristic global optimization algorithm for multidimensional and multimodal problems. The array synthesis objective function is formulated, and the element locations are then optimized using the WDO algorithm to achieve minimum sidelobe level (SLL) suppression, the desired beamwidth, and null placement in specified directions. The results of the WDO algorithm are validated by comparison with results obtained using PSO and other evolutionary algorithms reported in the literature for a linear array (N=10). The synthesis results, such as the radiation pattern and convergence graph, show that the WDO algorithm performs far better than the common PSO, CLPSO, and other evolutionary algorithms.
Keywords: antenna radiation patterns; evolutionary computation; iterative methods; linear antenna arrays; particle swarm optimisation; PSO; SLL suppression; WDO algorithm; convergence graph show; evolutionary algorithm; iterative heuristic global optimization algorithm; linear array synthesis; minimum sidelobe level suppression; multimodal problems; null control; particle swarm optimization; radiation pattern; wind driven optimization; Algorithm design and analysis; Arrays; Electromagnetics; Linear antenna arrays; Optimization; Particle swarm optimization; Antenna array; comprehensive learning particle swarm optimization (CLPSO); evolutionary programming; interference; linear array design; particle swarm optimization (PSO); sidelobe level suppression (SLL); wind driven optimization (WDO) (ID#: 16-9886)


M. Cheng, Y. Wu and Y. Chen, “Capacity Analysis for Non-Orthogonal Overloading Transmissions Under Constellation Constraints,” Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, Nanjing, 2015, pp. 1-5. doi: 10.1109/WCSP.2015.7341294
Abstract: In this work, constellation constrained (CC) capacities of a series of non-orthogonal overloading transmission schemes are derived for AWGN channels. All these schemes follow a similar transmission structure, in which modulated symbols are spread onto a group of resource elements (REs) in a sparse manner, i.e., only a part of the REs have nonzero components while the others are filled with zeros. Multiple access schemes that follow this structure are generally called sparse code multiple access (SCMA). In particular, a complete SCMA scheme combines multi-dimensional modulation and low density spreading (LDS) so that the symbols from the same data layer on different REs are different but dependent. If the spread symbols are the same, it is a simplified implementation of SCMA called LDS. Furthermore, depending on whether the numbers of non-zero components for each data layer are equal or not, there are regular LDS (LDS in short) and irregular LDS (IrLDS), respectively. The paper shows, through theoretical derivation and simulation results, that complete SCMA schemes outperform the simplified LDS/IrLDS versions. Moreover, we also show that applying phase rotation in the modulator can significantly boost the link performance of such non-orthogonal multiple access schemes.
Keywords: AWGN channels; code division multiple access; AWGN channel; SCMA scheme; constellation constraint; low density spreading; multidimensional modulation; nonorthogonal multiple access scheme link performance; nonorthogonal overloading transmission capacity analysis; resource element; sparse code multiple access scheme; 5G mobile communication; Modulation; Multiaccess communication; Nickel; Simulation; Sparse matrices; Constellation Constrained capacity; IrLDS; LDS; Non-orthogonal multiple access; SCMA (ID#: 16-9887)


Y. Yiru, G. Yinghui and X. Jianyu, “Auto-Encoder Based Modeling of Combustion System for Circulating Fluidized Bed Boiler,” Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, Ningbo, 2015, pp. 1-4. doi: 10.1109/ICSPCC.2015.7338946
Abstract: Deep learning has attracted the interest of many researchers, but multidimensional algorithms require large data storage space. This paper proposes a model of the combustion system used in a Circulating Fluidized Bed Boiler (CFBB), based on the auto-encoder method from deep learning. A 20-dimensional input sample set forms the input layer, from which the units of the hidden layer are calculated. The data dimension is reduced through the auto-encoder, and the reduced data then serve as input to a Radial Basis Function (RBF) neural network, which carries out the modeling. Compared with traditional methods, the auto-encoder is well suited to this modeling task, and the samples are greatly reduced for the subsequent work. Numerical results provided in this paper validate the proposed model and method, as well as the validity of the auto-encoder conversion strategy.
Keywords: boilers; combustion; fluidised beds; radial basis function networks; CFBB; auto-encoder based modeling; circulating fluidized bed boiler; combustion system; data dimension; radial basis function neural network; Combustion; Computational modeling; Data models; Mathematical model; Neural networks; Testing; Training; Circulating fluidized bed boiler (CFBB); auto-encoders; combustion system; modeling (ID#: 16-9888)
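The dimensionality-reduction step can be sketched in its simplest form. Assumption: a linear encoder is used here purely for illustration (the linear case is equivalent to PCA), whereas the paper trains a nonlinear auto-encoder; the sample count and code size below are invented.

```python
import numpy as np

# 500 hypothetical combustion samples with 20 features each, as in the
# paper's 20-dimensional input layer (values here are synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))

mu = X.mean(axis=0)
Xc = X - mu                                   # center the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

def encode(x, k=5):
    """Linear 'encoder': project 20-D samples onto the top-k directions.
    The resulting low-dimensional codes would feed the RBF network."""
    return (x - mu) @ Vt[:k].T

codes = encode(X)
print(codes.shape)  # (500, 5): greatly reduced input for the downstream model
```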


P. P. Vaidyanathan, “Multidimensional Ramanujan-sum Expansions on Nonseparable Lattices,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 3666-3670. doi: 10.1109/ICASSP.2015.7178655
Abstract: It is well-known that the Ramanujan-sum cq(n) has applications in the analysis of periodicity in sequences. Recently the author developed a new type of Ramanujan-sum representation especially suited for finite duration sequences x(n). This is based on decomposing x(n) into a sum of signals belonging to so-called Ramanujan subspaces Sqi. This offers an efficient way to identify periodic components using integer computations and projections, since cq(n) is integer valued. This paper revisits multidimensional signals with periodicity on possibly nonseparable integer lattices. Multidimensional Ramanujan-sums and Ramanujan-subspaces are developed for this case. A Ramanujan-sum based expansion for multidimensional signals is then proposed, which is useful to identify periodic components on nonseparable lattices.
Keywords: signal representation; finite duration sequences; integer computations; multidimensional Ramanujan-sum expansions; multidimensional signals; nonseparable lattices; Dictionaries; Discrete Fourier transforms; Finite impulse response filters; Lattices; Matrix decomposition; Tensile stress; Ramanujan-sum on lattices; integer basis; periodic subspaces; periodicity lattices (ID#: 16-9889)
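The one-dimensional Ramanujan sum cq(n) underlying this expansion is simple to compute and, as the abstract notes, integer-valued and periodic in n with period q. A minimal pure-Python sketch (illustrative only, not the author's code):

```python
from math import gcd, cos, pi

def ramanujan_sum(q, n):
    """c_q(n) = sum of exp(2*pi*i*k*n/q) over 1 <= k <= q with gcd(k, q) = 1.
    The imaginary parts cancel, so the sum is real and in fact an integer."""
    s = sum(cos(2 * pi * k * n / q) for k in range(1, q + 1) if gcd(k, q) == 1)
    return round(s)

# c_q(n) repeats with period q; e.g. c_4 cycles through 2, 0, -2, 0.
# Known special values: c_q(0) = Euler totient phi(q), c_q(1) = Moebius mu(q).
print([ramanujan_sum(4, n) for n in range(8)])  # → [2, 0, -2, 0, 2, 0, -2, 0]
```

Projecting a sequence onto shifts of cq(n) is what isolates the period-q component; the paper generalizes exactly this machinery to nonseparable integer lattices.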


Rui Zeng, Jiasong Wu, L. Senhadji and Huazhong Shu, “Tensor Object Classification via Multilinear Discriminant Analysis Network,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, 2015, pp. 1971-1975. doi: 10.1109/ICASSP.2015.7178315
Abstract: This paper proposes a multilinear discriminant analysis network (MLDANet) for the recognition of multidimensional objects, known as tensor objects. The MLDANet is a variation of the linear discriminant analysis network (LDANet) and the principal component analysis network (PCANet), both of which are recently proposed deep learning algorithms. The MLDANet consists of three parts: (1) the encoder learned by MLDA from tensor data; (2) the feature maps obtained from the decoder; and (3) binary hashing and histograms for feature pooling. A learning algorithm for MLDANet is described. Evaluations on the UCF11 database indicate that the proposed MLDANet outperforms PCANet, LDANet, MPCA+LDA, and MLDA in terms of classification of tensor objects.
Keywords: feature extraction; image classification; image coding; learning (artificial intelligence); object recognition; principal component analysis; tensors; LDANet; MLDANet; PCANet; UCF11 database; binary hashing; binary histogram; deep learning algorithms; feature pooling; features maps; linear discriminant analysis network; multidimensional object recognition; multilinear discriminant analysis network; principal component analysis network; tensor object classification; Erbium; Deep learning; tensor object classification (ID#: 16-9890)


S. Smith, N. Ravindran, N. D. Sidiropoulos and G. Karypis, “SPLATT: Efficient and Parallel Sparse Tensor-Matrix Multiplication,” Parallel and Distributed Processing Symposium (IPDPS), 2015 IEEE International, Hyderabad, 2015, pp. 61-70. doi: 10.1109/IPDPS.2015.27
Abstract: Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state-of-the-art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricized tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reorderings and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.
Keywords: C language; cache storage; data structures; matrix multiplication; shared memory systems; software libraries; sparse matrices; tensors; C library; Khatri-Rao product; SPLATT; cache tiling; cache-friendly reordering; data structure; matricide tensor multiplication; multidimensional arrays; parallel sparse tensor-matrix multiplication; shared-memory parallelism; sparse tensor factorization; three-mode tensors; Algorithm design and analysis; Context; Data structures; Memory management; Parallel processing; Sparse matrices; Tensile stress; CANDECOMP; CPD; PARAFAC; Sparse tensors; parallel (ID#: 16-9891)
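The kernel SPLATT accelerates, the matricized tensor times Khatri-Rao product (MTTKRP), can be written in a few lines for a small dense three-mode tensor; SPLATT's contribution is computing it on sparse tensors without ever materializing the Khatri-Rao product. A dense NumPy reference sketch (illustrative only, not the SPLATT library):

```python
import numpy as np

def mttkrp(X, B, C):
    """Mode-1 MTTKRP: X_(1) (C ⊙ B), i.e. the matricized tensor times the
    Khatri-Rao (column-wise Kronecker) product of the factor matrices."""
    return np.einsum('ijk,jr,kr->ir', X, B, C)

I, J, K, R = 4, 3, 5, 2
rng = np.random.default_rng(1)
X = rng.random((I, J, K))
B, C = rng.random((J, R)), rng.random((K, R))

# Naive check: explicitly form the Khatri-Rao product and multiply.
kr = np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)
print(np.allclose(mttkrp(X, B, C), X.reshape(I, J * K) @ kr))  # True
```

The `einsum` form never builds the (J·K)×R Khatri-Rao matrix, which is the same memory-saving idea SPLATT pushes further with sparsity-aware data structures.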


J. L. Jodra, I. Gurrutxaga and J. Muguerza, “A Study of Memory Consumption and Execution Performance of the cuFFT Library,” 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), Krakow, 2015, pp. 323-327. doi: 10.1109/3PGCIC.2015.66
Abstract: The Fast Fourier Transform (FFT) is an essential primitive that has been applied in various fields of science and engineering. In this paper, we present a study of Nvidia's cuFFT library — a proprietary FFT implementation for Nvidia's Graphics Processing Units — to identify the impact that two configuration parameters have on its execution. One useful feature of the cuFFT library is that it can be used to efficiently calculate several FFTs at once. In this work we analyse the effect this feature has on memory consumption and execution time in order to find a useful trade-off. Another important feature of the library is its support for sophisticated input and output data layouts. This feature allows, for instance, performing multidimensional FFT decomposition with no need for data transpositions. We have identified some patterns which may help to decide which parameters and values are key to achieving increased performance in an FFT calculation. We believe that this study will help researchers who wish to use the cuFFT library to decide which parameter values are best suited to achieve higher performance in their executions, both in time and memory consumption.
Keywords: fast Fourier transforms; graphics processing units; libraries; mathematics computing; Nvidia cuFFT library; Nvidia graphics processing units; execution performance; execution time; fast Fourier transform; input data layout; memory consumption; multidimensional FFT decomposition; output data layout; Fast Fourier transforms; Graphics processing units; Layout; Libraries; Memory management; Signal processing algorithms; CUDA; cuFFT (ID#: 16-9892)
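The batching feature under study can be illustrated in NumPy, which here stands in for cuFFT: one batched call replaces a loop of single transforms, yielding identical results with a different time/memory profile. In cuFFT proper this corresponds to its plan-many interface, where batch size and the stride/distance between signals describe the data layout. The batch and signal sizes below are invented.

```python
import numpy as np

# 64 signals of length 256, stored contiguously one per row.
rng = np.random.default_rng(2)
signals = rng.random((64, 256))

# Batched transform: all 64 FFTs in a single library call.
one_call = np.fft.fft(signals, axis=-1)

# Equivalent per-signal loop: same math, more call overhead, less working set.
looped = np.array([np.fft.fft(s) for s in signals])

print(np.allclose(one_call, looped))  # True: the trade-off is purely in cost
```

The paper's measurements explore exactly this trade-off on the GPU, where larger batches raise device-memory consumption in exchange for throughput.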


S. Javed, T. Bouwmans and S. K. Jung, “Stochastic Decomposition into Low Rank and Sparse Tensor for Robust Background Subtraction,” Imaging for Crime Prevention and Detection (ICDP-15), 6th International Conference on, London, 2015, pp. 1-6. doi: 10.1049/ic.2015.0105
Abstract: Background subtraction (BS) is a very important task for various computer vision applications. Higher-Order Robust Principal Component Analysis (HORPCA) based robust tensor recovery or decomposition shows great potential for BS. The background (BG) sequence is modeled by an underlying low-dimensional subspace called the low-rank component, while the sparse tensor constitutes the foreground (FG) mask. However, traditional tensor-based decomposition methods are sensitive to outliers and, because they rely on batch optimization, must process high-dimensional data in full. As a result, earlier approaches suffer from huge memory usage and computational issues, which are not desirable for real-time systems. To tackle these challenges, we apply the idea of stochastic optimization on tensors for robust low-rank and sparse error separation. Our scheme processes only one sample per time instant from each unfolding matrix of the tensor to separate the low-rank and sparse components, and updates the low-dimensional basis when a new sample is revealed. This iterative multi-dimensional tensor data optimization scheme for decomposition is independent of the number of samples and hence reduces the memory and computational complexities. Experimental evaluations on both synthetic and real-world datasets demonstrate the robustness and comparative performance of our approach against its batch counterpart, without sacrificing online processing.
Keywords: computational complexity; computer vision; image sequences; iterative methods; optimisation; principal component analysis; stochastic processes; tensors; video signal processing; BG sequence; HORPCA based robust tensor decomposition; HORPCA based robust tensor recovery; batch optimization methods; computational complexity reduction; computer vision applications; foreground mask; high dimensional data; higher-order robust principal component analysis; iterative multidimensional tensor data optimization scheme; low rank tensor; memory complexity reduction; robust background subtraction; robust low-rank error separation; sparse error separation; sparse tensor; stochastic decomposition; stochastic optimization; video analysis; Background/Foreground Separation; Low-rank tensor; Stochastic optimization; Tensor decomposition (ID#: 16-9893)


Sangeetha P., Karthik M. and Kalavathi Devi T., “VLSI Architectures for the 4-Tap and 6-Tap 2-D Daubechies Wavelet Filters Using Pipelined Direct Mapping Method,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-6. doi: 10.1109/ICIIECS.2015.7193010
Abstract: This paper presents a simple design of a multilevel two-dimensional (2-D) Daubechies wavelet transform with the pipelined direct mapping method for image compression. The Daubechies 4-tap (Daub4) filter is selected together with the pipelined direct mapping technique. Due to the separability property of the multi-dimensional Daubechies wavelet, the architecture is implemented as a cascade of two N-point one-dimensional (1-D) Daub4 and Daub6 transforms. The 2-dimensional discrete wavelet transform lifting scheme algorithm has been implemented in MATLAB for both the forward Daubechies wavelet transform (FDWT) and inverse Daubechies wavelet transform (IDWT) modules, to determine the peak signal-to-noise ratio (PSNR) and correlation of the retrieved image.
Keywords: VLSI; data compression; digital signal processing chips; discrete wavelet transforms; image coding; image filtering; image retrieval; medical image processing; pipeline arithmetic; 2D Daubechies wavelet filters; 2D Daubechies wavelet transform; 2D discrete wavelet transform lifting scheme algorithm;  FDWT; IDWT; MATLAB program; PSNR; VLSI architectures; forward Daubechies wavelet transform; image compression; inverse Daubechies wavelet transform; multidimensional Daubechies; peak signal to noise ratio; pipelined direct mapping method; Biomedical imaging; Computer architecture; Conferences; Discrete wavelet transforms; Image coding; Daubechies wavelet filter; MATLAB (ID#: 16-9894)
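The 1-D Daub4 building block behind this separable architecture can be sketched as an orthogonal transform matrix, which makes the FDWT/IDWT pair and perfect reconstruction explicit. Python stands in here for the paper's MATLAB implementation; this is one level of the periodic transform, illustrative only.

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # Daub4 lowpass
g = np.array([h[3], -h[2], h[1], -h[0]])                           # highpass (QMF)

def daub4_matrix(n):
    """One level of the periodic Daub4 DWT as an orthogonal n x n matrix:
    even rows hold shifted lowpass taps, odd rows shifted highpass taps."""
    W = np.zeros((n, n))
    for k in range(n // 2):
        for m in range(4):
            W[2 * k, (2 * k + m) % n] = h[m]       # approximation coefficients
            W[2 * k + 1, (2 * k + m) % n] = g[m]   # detail coefficients
    return W

n = 16
W = daub4_matrix(n)
x = np.random.default_rng(3).random(n)
coeffs = W @ x          # forward transform (FDWT)
x_rec = W.T @ coeffs    # inverse: W is orthogonal, so W^T = W^(-1) (IDWT)
print(np.allclose(x, x_rec))  # True: perfect reconstruction
```

The 2-D transform in the paper then follows from separability: apply the 1-D transform along the rows of the image and again along the columns.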


C. Anagnostopoulos and P. Triantafillou, “Learning Set Cardinality in Distance Nearest Neighbours,” Data Mining (ICDM), 2015 IEEE International Conference on, Atlantic City, NJ, 2015, pp. 691-696. doi: 10.1109/ICDM.2015.17
Abstract: Distance-based nearest neighbours (dNN) queries and aggregations over their answer sets are important for exploratory data analytics. We focus on the Set Cardinality Prediction (SCP) problem for the answer set of dNN queries. We contribute a novel, query-driven perspective on this problem, whereby answers to previous dNN queries are used to learn the answers to incoming dNN queries. The proposed novel machine learning (ML) model learns the dynamically changing query pattern space and can thus focus only on the portion of the data being queried. The model enjoys several comparative advantages in prediction error and space requirements. In addition, it is applicable in environments with sensitive data and/or where data accesses are too costly to execute, in which the data-centric state of the art is inapplicable and/or too costly. A comprehensive performance evaluation of our model is conducted, evaluating its comparative advantages versus acclaimed methods (i.e., different self-tuning histograms, sampling, multidimensional histograms, and the power method).
Keywords: data analysis; learning (artificial intelligence); query processing; ML model; SCP problem; dNN query; data access; distance-based nearest neighbour query; exploratory data analytics; learning set cardinality; machine learning model; query pattern space; set cardinality prediction problem; Adaptation models; Estimation; Histograms; Prototypes; Quantization (signal); Solid modeling; Yttrium; Query-driven set cardinality prediction; distance nearest neighbors analytics; hetero-associative competitive learning; local regression vector quantization (ID#: 16-9895)


U. Arora and N. Sukavanam, “Approximate Controllability of a Second Order Delayed Semilinear Stochastic System with Nonlocal Conditions,” Signal Processing, Computing and Control (ISPCC), 2015 International Conference on, Waknaghat, 2015, pp. 230-235. doi: 10.1109/ISPCC.2015.7375031
Abstract: In this paper, the approximate controllability of a second order delayed semilinear stochastic system with nonlocal conditions is discussed. The control function for this system is established with the help of an infinite-dimensional controllability operator. Using this control function, sufficient conditions for the approximate controllability of the proposed system are obtained via Sadovskii's Fixed Point Theorem.
Keywords: controllability; delay systems; multidimensional systems; stochastic systems; Sadovskii fixed point theorem; approximate controllability; infinite dimensional controllability operator; nonlocal conditions; second order delayed semilinear stochastic system; sufficient conditions; Aerospace electronics; Controllability; Generators; Hilbert space; Stochastic systems; Yttrium; Approximate Controllability; Delayed System; Sadovskii's Fixed Point Theorem; Semilinear Systems; Stochastic Control System (ID#: 16-9896)



Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.