How can we generate semantically meaningful and structurally sound adversarial examples? We propose to answer this question by restricting the search for adversaries to the true data manifold. To this end, we introduce a stochastic variational inference method that learns the data manifold, in the presence of continuous latent variables with intractable posterior distributions, without requiring an a priori form for the data's underlying distribution. We then propose a manifold perturbation strategy that ensures the perturbed cases remain on the manifold of the original examples, thereby generating the adversaries. We evaluate our approach on a number of image and text datasets. Our results show the effectiveness of our approach in producing coherent and realistic-looking adversaries that can evade strong defenses known to be resilient to traditional adversarial attacks.
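
To make the perturbation strategy concrete, here is a minimal sketch of an on-manifold, latent-space attack. It assumes hypothetical pre-trained PyTorch modules `encoder` and `decoder` (a VAE-style generative model of the data manifold) and a target classifier `clf`; the abstract does not specify these interfaces, so all names and hyperparameters below are illustrative only.

```python
import torch

def on_manifold_attack(x, y, encoder, decoder, clf, steps=50, lr=0.05):
    """Search for an adversary in the latent space of a learned generative model.

    encoder/decoder/clf are assumed to be pre-trained; the candidate adversary is
    always the output of the decoder, so it stays close to the learned data manifold.
    """
    with torch.no_grad():
        mu, _ = encoder(x)                 # assumed to return (mean, log-variance)
    z = mu.clone().requires_grad_(True)    # start from the posterior mean
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_adv = decoder(z)                 # candidate remains on the learned manifold
        logits = clf(x_adv)
        # untargeted attack: maximize the classifier's loss on the true label
        loss = -torch.nn.functional.cross_entropy(logits, y)
        loss = loss + 0.1 * (z - mu).pow(2).sum()   # keep the latent code near the original
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```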

We are interested in the decomposition of motion data into a sparse linear combination of base functions, which enables efficient data processing. We combine two prominent frameworks: dynamic time warping (DTW), which offers particularly successful pairwise motion data comparison, and sparse coding (SC), which enables an automatic decomposition of vectorial data into a sparse linear combination of base vectors. We enhance SC in two ways: an efficient kernelization, which extends its application domain to general similarity data such as that offered by DTW, and its restriction to non-negative linear representations of signals and base vectors, in order to guarantee a meaningful dictionary. Empirical evaluations on motion capture benchmarks show the effectiveness of our framework with respect to interpretability and discrimination.
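
The optimization details are left to the paper; as a rough sketch of the kernelized, non-negative coding step, assume a precomputed similarity matrix `K` built from pairwise DTW comparisons (symmetrized and, if necessary, corrected to be positive semidefinite) and an atom matrix `A` whose columns express dictionary atoms as non-negative combinations of training sequences. Step sizes and names are illustrative.

```python
import numpy as np

def kernel_nn_sparse_code(K, A, i, lam=0.1, steps=200, lr=0.01):
    """Non-negative sparse coding of sequence i in the feature space induced by K.

    With atoms D = Phi @ A implicit in feature space, the reconstruction error
    can be written purely in terms of the kernel matrix:
        ||phi(x_i) - Phi A h||^2 = K[i,i] - 2 K[i] A h + h' A' K A h
    """
    h = np.zeros(A.shape[1])
    AtKA = A.T @ K @ A
    Atk = A.T @ K[:, i]
    for _ in range(steps):
        grad = 2 * (AtKA @ h - Atk) + lam    # gradient of the l1-penalized loss (h >= 0)
        h = np.maximum(h - lr * grad, 0.0)   # projected step keeps the code non-negative
    return h
```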

The Expectation-Maximization (EM) algorithm is perhaps the most broadly used algorithm for inference in latent variable problems. A theoretical understanding of its performance, however, remains largely lacking. Recent results established that EM enjoys global convergence for Gaussian Mixture Models. For Mixed Linear Regression, however, only local convergence results have been established, and those only for the high-SNR regime. We show here that EM converges for mixed linear regression with two components (it is known that it may fail to converge for three or more), and moreover that this convergence holds under random initialization. Our analysis reveals that EM behaves very differently in Mixed Linear Regression than in Gaussian Mixture Models, and hence our proofs require the development of several new ideas.
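
For the symmetric two-component model $y_i = s_i \langle x_i, \beta^\ast \rangle + \varepsilon_i$ with $s_i \in \{\pm 1\}$, the EM updates have a simple closed form. The sketch below shows one hedged implementation with random initialization; the noise level, iteration count, and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def em_mixed_linear_regression(X, y, sigma=1.0, iters=100, seed=0):
    """EM for two-component mixed linear regression: y ~ <x, +/-beta> + Gaussian noise."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta = rng.normal(size=d)                      # random initialization
    for _ in range(iters):
        # E-step: posterior probability that each sample came from the +beta component
        w = 1.0 / (1.0 + np.exp(-2 * y * (X @ beta) / sigma**2))
        # M-step: weighted least squares, equivalent to regressing on sign-corrected targets
        s = 2 * w - 1                              # expected sign, in [-1, 1]
        beta = np.linalg.lstsq(X, s * y, rcond=None)[0]
    return beta
```

Note that the components are only identifiable up to a global sign, so a recovered estimate should be compared to the ground truth up to that sign ambiguity.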

Recent literature has shown that symbolic data, such as text and graphs, is often better represented by points on a curved manifold than in Euclidean space. However, geometrical operations on manifolds are generally more complicated than in Euclidean space, and thus many processing and analysis techniques taken for granted in Euclidean space are difficult on manifolds. A priori, it is not obvious how such methods may be generalized to manifolds. We consider specifically the problem of distance metric learning and present a framework that solves it on a large class of manifolds, such that similar data are located in closer proximity with respect to the manifold distance function. In particular, we extend existing metric learning algorithms and derive the corresponding sample complexity rates for the manifold case. Additionally, we demonstrate improved performance in $k$-means clustering and $k$-nearest neighbor classification on real-world complex networks using our methods.
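
The learned metric itself is not specified in the abstract. Purely to illustrate what classification with respect to a manifold distance means, the sketch below runs $k$-nearest-neighbor prediction with the Poincaré-ball (hyperbolic) distance, a common curved geometry for embedding graphs; it is not the authors' learned metric.

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    duv = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u * u)) * (1 - np.sum(v * v)) + eps
    return np.arccosh(1 + 2 * duv / denom)

def knn_predict(query, points, labels, k=5):
    """k-NN classification using the manifold distance instead of the Euclidean one."""
    d = np.array([poincare_dist(query, p) for p in points])
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]
```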

We investigate non-negative least squares (NNLS) for the recovery of sparse non-negative vectors from noisy, biased linear measurements. We build upon recent results from [1] showing that, for matrices whose row span intersects the positive orthant, the nullspace property (NSP) implies compressed sensing recovery guarantees for NNLS. Such guarantees are as good as those for $\ell_1$-regularized estimators but require no tuning at all. A bias in the sensing matrix improves this auto-regularization feature of NNLS, and the sparse recovery performance is then determined by the NSP alone. We show that the NSP holds with high probability for shifted symmetric subgaussian matrices and that its quality is independent of the bias. As a tool for proving this result, we establish a debiased version of Mendelson's small ball method.
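
A small numerical illustration of this tuning-free behavior; the dimensions, unit shift, and noise level below are chosen arbitrarily and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
m, n, s = 80, 200, 5                      # measurements, ambient dimension, sparsity

# Shifted (biased) subgaussian sensing matrix: Rademacher entries plus a positive shift
A = rng.choice([-1.0, 1.0], size=(m, n)) + 1.0

# Sparse non-negative ground truth and noisy measurements
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.uniform(1.0, 3.0, s)
y = A @ x_true + 0.01 * rng.normal(size=m)

# Plain NNLS: no regularization parameter to tune
x_hat, _ = nnls(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```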

Dictionary learning and component analysis models are fundamental for learning compact representations that are relevant to a given task (feature extraction, dimensionality reduction, denoising, etc.). The model complexity is encoded by means of specific structure, such as sparsity, low-rankness, or nonnegativity. Unfortunately, approaches like K-SVD - which learn dictionaries for sparse coding via the Singular Value Decomposition (SVD) - are hard to scale to high-volume and high-dimensional visual data and are fragile in the presence of outliers. Conversely, robust component analysis methods such as Robust Principal Component Analysis (RPCA) are able to recover low-complexity (e.g., low-rank) representations from data corrupted with noise of unknown magnitude and support, but they do not provide a dictionary that respects the structure of the data (e.g., images) and involve expensive computations. In this paper, we propose a novel Kronecker-decomposable component analysis model, coined Robust Kronecker Component Analysis (RKCA), that combines ideas from sparse dictionary learning and robust component analysis. RKCA has several appealing properties: it is robust to gross corruption, it can be used for low-rank modeling, and it leverages separability to solve significantly smaller problems. We design an efficient learning algorithm by drawing links with a restricted form of tensor factorization, and analyze its optimality and low-rankness properties. The effectiveness of the proposed approach is demonstrated on real-world applications, namely background subtraction and image denoising and completion, through a thorough comparison with the current state of the art.
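
The RKCA algorithm itself is beyond the scope of a short snippet, but the separability it exploits is easy to illustrate: a Kronecker-decomposable model $X \approx A R B^\top$ with a small sparse core $R$ is equivalent to coding $\mathrm{vec}(X)$ over the much larger flat dictionary $B \otimes A$, so working with the two factors keeps the subproblems small. The sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r1, r2 = 64, 64, 8, 8

A = rng.normal(size=(m, r1))    # left factor dictionary (acts on image rows)
B = rng.normal(size=(n, r2))    # right factor dictionary (acts on image columns)
R = np.zeros((r1, r2))
R[rng.integers(0, r1, 5), rng.integers(0, r2, 5)] = 1.0   # sparse core

# Separable reconstruction equals coding over the flat Kronecker dictionary
X_sep = A @ R @ B.T
X_kron = (np.kron(B, A) @ R.reshape(-1, order="F")).reshape(m, n, order="F")
assert np.allclose(X_sep, X_kron)

# Storage: two small factors vs. the equivalent (m*n) x (r1*r2) flat dictionary
print(A.size + B.size, np.kron(B, A).size)
```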

The paper discusses a series of results concerning reproducing kernel Hilbert spaces, related to the factorization of their kernels. In particular, it is proved that for a large class of spaces isometric multipliers are trivial. Conditions are also given, for certain spaces, for obtaining a particular type of dilation, as well as a classification of Brehmer-type submodules.
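
For readers less familiar with the terminology, here is a standard reminder of the objects involved (textbook definitions, not statements taken from the paper; "trivial" is read here as a unimodular constant):

```latex
Let $\mathcal{H}_k$ be a reproducing kernel Hilbert space of functions on a set $X$.
A function $\varphi : X \to \mathbb{C}$ is a \emph{multiplier} of $\mathcal{H}_k$ if
$\varphi f \in \mathcal{H}_k$ for every $f \in \mathcal{H}_k$; the associated
multiplication operator is $M_\varphi f = \varphi f$. The multiplier is
\emph{isometric} when $\lVert M_\varphi f \rVert = \lVert f \rVert$ for all
$f \in \mathcal{H}_k$, and the trivial examples are the unimodular constants
$\varphi \equiv c$ with $\lvert c \rvert = 1$.
```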

It has been shown that encoding images and videos through Symmetric Positive Definite (SPD) matrices, and taking into account the Riemannian geometry of the resulting space, can lead to increased classification performance. The manifold geometry is typically accounted for by embedding the manifolds into tangent spaces or Reproducing Kernel Hilbert Spaces (RKHS). Recently, it was shown that embedding such manifolds into a Random Projection Space (RPS), rather than an RKHS or a tangent space, leads to higher classification and clustering performance. However, classification performance over RPS may vary significantly depending on the structure and dimensionality of the randomly generated hyperplanes. In addition, fine-tuning RPS is data-expensive (as it requires validation data), time-consuming, and resource-demanding. In this paper, we introduce an approach to learn an optimized kernel-based projection (with fixed dimensionality) by employing the concept of subspace clustering. Specifically, we encode the association of data points with their underlying subspaces in order to generate meaningful hyperplanes. Further, we adopt dictionary learning, sparse coding, and discriminative analysis for the optimized kernel-based projection space (OPS) on SPD manifolds. We validate our algorithm on several classification tasks. The experimental results demonstrate that the proposed method outperforms state-of-the-art methods on such manifolds.
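
As a minimal sketch of the random-projection-style embedding the paper starts from (not the optimized, subspace-clustering-based projection it proposes), one can represent each SPD matrix by its log-Euclidean kernel similarities to a set of randomly generated SPD "hyperplanes". All names, the kernel choice, and the parameters below are illustrative assumptions.

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(X, Y, gamma=0.5):
    """Log-Euclidean RBF kernel between two SPD matrices."""
    d = np.linalg.norm(spd_log(X) - spd_log(Y), "fro")
    return np.exp(-gamma * d ** 2)

def random_spd(d, rng):
    """A random SPD 'hyperplane' built from a Gaussian factor."""
    G = rng.normal(size=(d, d))
    return G @ G.T + 1e-3 * np.eye(d)

def rps_embed(X, hyperplanes, gamma=0.5):
    """Represent one SPD matrix by its kernel similarities to the hyperplanes."""
    return np.array([log_euclidean_kernel(X, W, gamma) for W in hyperplanes])

rng = np.random.default_rng(0)
planes = [random_spd(5, rng) for _ in range(20)]   # embedding dimensionality = 20
x = random_spd(5, rng)
print(rps_embed(x, planes).shape)                  # -> (20,)
```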

Data encoded as symmetric positive definite (SPD) matrices frequently arise in many areas of computer vision and machine learning. While these matrices form an open subset of the Euclidean space of symmetric matrices, viewing them through the lens of non-Euclidean Riemannian geometry often turns out to be better suited in capturing several desirable data properties. However, formulating classical machine learning algorithms within such a geometry is often non-trivial and computationally expensive. Inspired by the great success of dictionary learning and sparse coding for vector-valued data, our goal in this paper is to represent data in the form of SPD matrices as sparse conic combinations of SPD atoms from a learned dictionary via a Riemannian geometric approach. To that end, we formulate a novel Riemannian optimization objective for dictionary learning and sparse coding in which the representation loss is characterized via the affine invariant Riemannian metric. We also present a computationally simple algorithm for optimizing our model. Experiments on several computer vision datasets demonstrate superior classification and retrieval performance using our approach when compared to sparse coding via alternative non-Riemannian formulations.
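
A sketch of the two ingredients named above: the affine-invariant Riemannian metric and the loss of a sparse conic combination of SPD atoms. The dictionary update and the optimization loop are omitted, and the names (`atoms`, `w`, `lam`) are illustrative, not the paper's notation.

```python
import numpy as np

def spd_sqrt_inv(X):
    """Inverse square root of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V / np.sqrt(w)) @ V.T

def airm_dist(X, Y):
    """Affine-invariant Riemannian distance: ||log(X^{-1/2} Y X^{-1/2})||_F."""
    S = spd_sqrt_inv(X)
    w = np.linalg.eigvalsh(S @ Y @ S)
    return np.sqrt(np.sum(np.log(w) ** 2))

def coding_loss(X, atoms, w, lam=0.1):
    """Loss of a sparse conic combination of SPD atoms under the AIRM, plus l1 penalty."""
    combo = sum(wi * Bi for wi, Bi in zip(w, atoms))   # conic combination (w >= 0)
    return airm_dist(X, combo) ** 2 + lam * np.sum(np.abs(w))
```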
