Binary tomography is concerned with the recovery of binary images from a few of their projections (i.e., sums of the pixel values along various directions). To reconstruct an image from noisy projection data, one can pose the task as a constrained least-squares problem. As the constraints are non-convex, many approaches to solving it rely on either relaxing the constraints or on heuristics. In this paper we propose a novel convex formulation, based on the Lagrange dual of the constrained least-squares problem. The resulting problem is a generalized LASSO problem which can be solved efficiently. It is a relaxation in the sense that it is only guaranteed to give a feasible solution, not necessarily the optimal one. In exhaustive experiments on small images (2x2, 3x3, 4x4) we find, however, that if the problem has a unique solution, our dual approach finds it. In the case of multiple solutions, our approach finds the commonalities between the solutions. Further experiments on realistic numerical phantoms and on an X-ray dataset show that our method compares favourably to Total Variation and DART.
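To make the setup concrete, here is a minimal toy sketch of binary tomography as constrained least squares, solved with a generic box relaxation (projected gradient on [0,1]^n followed by rounding). This is a baseline illustration only, NOT the paper's dual/generalized-LASSO method; the 2x2 image and the operator A are made up for the example.

```python
import numpy as np

# Toy binary tomography: recover a 2x2 binary image from its row and
# column sums (noise-free data b, projection operator A).
x_true = np.array([1.0, 0.0, 0.0, 0.0])           # image [[1,0],[0,0]] flattened
A = np.array([[1, 1, 0, 0],                        # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],                        # column sums
              [0, 1, 0, 1]], dtype=float)
b = A @ x_true

x = np.full(4, 0.5)                                # start at the box centre
step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1/L for the LS gradient
for _ in range(500):
    x = x - step * A.T @ (A @ x - b)               # gradient step on ||Ax-b||^2/2
    x = np.clip(x, 0.0, 1.0)                       # project onto the box [0,1]^n

x_bin = (x >= 0.5).astype(float)                   # round to a binary image
print(x_bin)
```

Here the relaxed problem has a unique box-constrained minimizer, so rounding recovers the true image; in general, as the abstract notes, rounding a relaxation need not yield an optimal binary solution.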


We provide a deterministic CONGEST algorithm that computes a constant-factor approximation of the minimum dominating set on graphs of bounded arboricity in $O(\log n)$ rounds. This improves over the well-known randomized algorithm of Lenzen and Wattenhofer [DISC 2010] by removing the need for randomization.
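For intuition on the underlying problem, here is a sequential greedy sketch of dominating-set approximation (not the distributed CONGEST algorithm of the paper, and with no arboricity-specific analysis): repeatedly pick the vertex that dominates the most not-yet-dominated vertices.

```python
def greedy_dominating_set(adj):
    """adj: dict mapping vertex -> set of neighbours."""
    undominated = set(adj)
    dom_set = set()
    while undominated:
        # pick the vertex whose closed neighbourhood covers the most
        # still-undominated vertices
        v = max(adj, key=lambda u: len((adj[u] | {u}) & undominated))
        dom_set.add(v)
        undominated -= adj[v] | {v}
    return dom_set

# A path 0-1-2-3-4: {1, 3} dominates every vertex.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
D = greedy_dominating_set(path)
print(D)
```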


Clustering methods have been used in a variety of fields such as image processing, data mining, pattern recognition, and statistical analysis. Generally, clustering algorithms consider all variables equally relevant, and uncorrelated, for the clustering task. Nevertheless, in real situations some variables may be correlated, and some may be more or less relevant, or even irrelevant, for this task. This paper proposes partitioning fuzzy clustering algorithms based on Euclidean, City-block and Mahalanobis distances and entropy regularization. These methods are iterative three-step algorithms which provide a fuzzy partition, a representative for each fuzzy cluster, and the relevance weight of the variables (or their correlation) by minimizing a suitable objective function. Several experiments on synthetic and real datasets, including an application to noisy image texture segmentation, demonstrate the usefulness of these adaptive clustering methods.
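A minimal sketch of the entropy-regularized alternation in the Euclidean case: step (1) computes soft memberships u_ik proportional to exp(-d_ik / lam), step (2) updates prototypes as membership-weighted means. The adaptive variable weights (the paper's third step) and the City-block/Mahalanobis variants are omitted here for brevity; the data and lam are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),      # cluster around (0, 0)
               rng.normal(5, 0.3, (20, 2))])     # cluster around (5, 5)
K, lam = 2, 1.0
centers = X[[0, 20]].copy()                      # one seed from each cluster
for _ in range(50):
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)   # squared distances
    U = np.exp(-d2 / lam)
    U /= U.sum(axis=1, keepdims=True)                     # fuzzy partition
    centers = (U.T @ X) / U.sum(axis=0)[:, None]          # weighted prototypes
print(np.round(centers))
```

The entropy term (controlled by lam) is what makes the memberships soft: larger lam gives fuzzier partitions, while lam -> 0 recovers hard assignments.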


From ancient to modern times, acoustic structures have been used to control the propagation of acoustic waves. However, designing acoustic structures has largely remained a time-consuming and computational-resource-consuming iterative process. In recent years, deep learning has attracted unprecedented attention for its ability to tackle hard problems on huge datasets, achieving state-of-the-art results in various tasks. In this work, an acoustic structure design method is proposed based on deep learning. Taking the design of a multi-order Helmholtz resonator as an example, we experimentally demonstrate the effectiveness of the proposed method. Our method not only gives very accurate predictions of the geometry of acoustic structures with multiple strongly coupled parameters, but is also capable of improving the performance of evolutionary approaches when optimizing for a desired property. Compared with conventional numerical methods, our method is more efficient, universal, and automatic, with a wide range of potential applications, such as speech enhancement, sound absorption and insulation.
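To illustrate the "optimization for a desired property" part, here is a minimal (1+1) evolution-strategy sketch of property-driven design. The objective f is a stand-in analytic surrogate, NOT a real Helmholtz-resonator model; in the paper a trained deep network would play this role (and make each evaluation cheap).

```python
import numpy as np

def f(geom):
    # stand-in surrogate: response is best when geometry equals (1, 2, 3)
    return -np.sum((geom - np.array([1.0, 2.0, 3.0])) ** 2)

rng = np.random.default_rng(1)
x, sigma = np.zeros(3), 0.5
for _ in range(2000):
    cand = x + sigma * rng.normal(size=3)   # mutate the candidate design
    if f(cand) >= f(x):                     # keep it if the property improved
        x = cand
    sigma *= 0.999                          # slowly shrink the mutation scale
print(np.round(x, 1))
```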


Recent progress in semi-supervised learning (SSL) has come from combining methods that exploit various aspects of the data distribution: some, e.g. image augmentation and consistency regularisation, rely on properties of $p(x)$, whereas others, such as entropy minimisation and pseudo-labelling, pertain to the sample-specific label distributions $p(y|x)$. Focusing on the latter, we propose a probabilistic model for discriminative SSL that mirrors its classical generative counterpart, filling a gap in existing semi-supervised learning theory. Under this model, several well-known SSL methods can be interpreted as imposing relaxations of an appropriate prior over learned parameters of $p(y|x)$. The same model extends naturally to neuro-symbolic SSL, often treated as a separate field, in which binary label attributes are subject to logical rules. The model thus also theoretically justifies a family of neuro-symbolic SSL methods and unifies them with standard SSL, taking a step towards bridging the divide between statistical learning and logical reasoning.
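As a concrete instance of the $p(y|x)$-side methods the paper analyses, here is a toy self-training (pseudo-labelling) loop: fit a nearest-centroid classifier on the few labelled points, label the unlabelled pool with its predictions, and refit. The 1-D two-cluster data is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X_lab = np.array([[-2.0], [2.0]]); y_lab = np.array([0, 1])   # one label per class
X_unl = np.concatenate([rng.normal(-2, 0.5, 50),
                        rng.normal(2, 0.5, 50)])[:, None]     # unlabelled pool

centroids = np.array([X_lab[y_lab == k].mean(0) for k in (0, 1)])
for _ in range(5):
    # pseudo-label each unlabelled point by its nearest centroid
    pseudo = (abs(X_unl - centroids[1]) < abs(X_unl - centroids[0])).ravel().astype(int)
    X_all = np.vstack([X_lab, X_unl])
    y_all = np.concatenate([y_lab, pseudo])
    centroids = np.array([X_all[y_all == k].mean(0) for k in (0, 1)])
print(np.round(centroids.ravel(), 1))
```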


We consider the decision-making framework of online convex optimization with a very large number of experts. This setting is ubiquitous in contextual and reinforcement learning problems, where the size of the policy class renders enumeration and search within the policy class infeasible. Instead, we consider generalizing the methodology of online boosting. We define a weak learning algorithm as a mechanism that guarantees multiplicatively approximate regret against a base class of experts. In this access model, we give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class. We consider both full and partial (a.k.a. bandit) information feedback models. We also give an analogous efficient boosting algorithm for the i.i.d. statistical setting. Our results simultaneously generalize online boosting and gradient boosting guarantees to the contextual learning model, online convex optimization, and bandit linear optimization settings.
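For background, here is the classical Hedge (multiplicative weights) sketch over a small, enumerable expert class, with the standard O(sqrt(T log N)) regret behaviour. This is the regime the paper moves beyond (its boosted learner competes with the convex hull of a base class without enumerating experts); the loss sequence here is random for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 5
losses = rng.uniform(0, 1, (T, N))              # per-round expert losses in [0, 1]
eta = np.sqrt(np.log(N) / T)                    # standard learning-rate choice
w = np.ones(N)
alg_loss = 0.0
for t in range(T):
    p = w / w.sum()                              # play the weighted mixture
    alg_loss += p @ losses[t]
    w *= np.exp(-eta * losses[t])                # downweight poorly performing experts
best = losses.sum(axis=0).min()                  # best single expert in hindsight
print(alg_loss - best)                           # regret, O(sqrt(T log N))
```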


As learning solutions reach critical applications in social, industrial, and medical domains, the need to curtail their behavior has become paramount. There is now ample evidence that without explicit tailoring, learning can lead to biased, unsafe, and prejudiced solutions. To tackle these problems, we develop a generalization theory of constrained learning based on the probably approximately correct (PAC) learning framework. In particular, we show that imposing requirements does not make a learning problem harder in the sense that any PAC learnable class is also PAC constrained learnable using a constrained counterpart of the empirical risk minimization (ERM) rule. For typical parametrized models, however, this learner involves solving a constrained non-convex optimization program for which even obtaining a feasible solution is challenging. To overcome this issue, we prove that under mild conditions the empirical dual problem of constrained learning is also a PAC constrained learner that now leads to a practical constrained learning algorithm based solely on solving unconstrained problems. We analyze the generalization properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
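A toy primal-dual sketch of the dual route to constrained learning: minimize a loss f subject to a requirement g(theta) <= 0 by alternating a primal gradient step on the Lagrangian with a dual ascent step on the multiplier, solving only unconstrained subproblems. Here f and g are simple scalar quadratics invented for illustration, not an actual learning problem or the paper's PAC analysis.

```python
import numpy as np

f  = lambda th: (th - 3.0) ** 2            # unconstrained optimum at theta = 3
df = lambda th: 2 * (th - 3.0)
g  = lambda th: th - 1.0                   # requirement: theta <= 1

theta, lam = 0.0, 0.0
for _ in range(5000):
    theta -= 0.01 * (df(theta) + lam)      # primal step on L = f + lam * g
    lam = max(0.0, lam + 0.01 * g(theta))  # dual ascent, keeping lam >= 0
print(round(theta, 2), round(lam, 2))
```

At the saddle point the constraint is active (theta = 1) and the multiplier settles at lam = -df(1) = 4, so each primal step is just unconstrained gradient descent on a penalized loss.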


We propose a novel approach for Bayesian optimization, called $\textsf{GP-DC}$, which combines Gaussian processes with distance correlation. It balances exploration and exploitation automatically, and requires no manual parameter tuning. We evaluate $\textsf{GP-DC}$ on a number of benchmark functions and observe that it outperforms state-of-the-art methods such as $\textsf{GP-UCB}$ and max-value entropy search, as well as the classical expected improvement heuristic. We also apply $\textsf{GP-DC}$ to optimize sequential integral observations with a variable integration range and verify its empirical efficiency on both synthetic and real-world datasets.
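For context, here is a minimal sketch of the GP-UCB baseline that GP-DC is compared against (the distance-correlation acquisition of GP-DC itself is not reproduced here): fit a GP posterior on the observed points, then score a candidate grid by an upper confidence bound. The objective, kernel length-scale, and UCB coefficient are illustrative choices.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

f = lambda x: -(x - 0.7) ** 2                    # unknown objective, max at 0.7
X = np.array([0.1, 0.5, 0.9]); y = f(X)          # observations so far
grid = np.linspace(0, 1, 101)

K  = rbf(X, X) + 1e-6 * np.eye(len(X))           # kernel matrix (with jitter)
Ks = rbf(grid, X)
mu  = Ks @ np.linalg.solve(K, y)                 # posterior mean on the grid
var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
ucb = mu + 2.0 * np.sqrt(np.clip(var, 0, None))  # upper confidence bound
x_next = grid[np.argmax(ucb)]                    # next query point
print(x_next)
```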


Non-Euclidean data that are indexed with a scalar predictor such as time are increasingly encountered in data applications, while statistical methodology and theory for such random objects are not well developed yet. To address the need for new methodology in this area, we develop a total variation regularization technique for nonparametric Fr\'echet regression, which refers to a regression setting where a response residing in a metric space is paired with a scalar predictor and the target is a conditional Fr\'echet mean. Specifically, we seek to approximate an unknown metric-space valued function by an estimator that minimizes the Fr\'echet version of least squares and at the same time has small total variation, appropriately defined for metric-space valued objects. We show that the resulting estimator is representable by a piecewise constant function and establish the minimax convergence rate of the proposed estimator for metric data objects that reside in Hadamard spaces. We illustrate the numerical performance of the proposed method for both simulated and real data, including metric spaces of symmetric positive-definite matrices with the affine-invariant distance, of probability distributions on the real line with the Wasserstein distance, and of phylogenetic trees with the Billera--Holmes--Vogtmann metric.
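The Euclidean special case already shows the piecewise-constant effect of the TV penalty. The sketch below denoises a noisy piecewise-constant signal by (sub)gradient descent on 0.5*||x - y||^2 + lam * sum_i |x_{i+1} - x_i|; for metric-space responses the paper replaces the squared error by a Frechet-mean criterion. The signal and lam are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 2.0], 50)                # piecewise constant, one jump
y = truth + rng.normal(0, 0.2, 100)              # noisy observations

lam, x = 1.0, y.copy()
for _ in range(3000):
    jumps = np.sign(np.diff(x))                  # subgradient of the TV term
    grad = x - y                                 # gradient of the data-fit term
    grad[:-1] -= lam * jumps                     # d/dx_i   of |x_{i+1} - x_i|
    grad[1:]  += lam * jumps                     # d/dx_{i+1} of the same terms
    x -= 0.01 * grad

tv = lambda z: np.abs(np.diff(z)).sum()
print(tv(y), tv(x))                              # total variation before / after
```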


Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. Our code is available at https://github.com/kjunelee/MetaOptNet.
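A sketch of a closed-form linear base learner of the kind MetaOptNet differentiates through: per-episode ridge regression on support-set embeddings, evaluated on query embeddings. (The paper's main base learner is an SVM solved as a QP via its dual; ridge is the simplest closed-form analogue, and the embeddings here are synthetic stand-ins for a learned feature extractor.)

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 8, 1.0
proto0, proto1 = -np.ones(d), np.ones(d)         # stand-in class "embeddings"
Xs = np.vstack([proto0 + 0.1 * rng.normal(size=(5, d)),
                proto1 + 0.1 * rng.normal(size=(5, d))])     # support set
Ys = np.array([[1, 0]] * 5 + [[0, 1]] * 5, dtype=float)      # one-hot labels

# closed-form ridge solution; differentiable w.r.t. the embeddings Xs
W = np.linalg.solve(Xs.T @ Xs + lam * np.eye(d), Xs.T @ Ys)

Xq = np.vstack([proto0 + 0.1 * rng.normal(size=(3, d)),
                proto1 + 0.1 * rng.normal(size=(3, d))])     # query set
pred = (Xq @ W).argmax(axis=1)
print(pred)
```

Because W is an explicit function of the support embeddings, gradients can flow through the solve into the feature extractor, which is the implicit-differentiation idea the abstract mentions.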


The key issue of few-shot learning is learning to generalize. In this paper, we propose a large margin principle to improve the generalization capacity of metric-based methods for few-shot learning. To realize this, we develop a unified framework to learn a more discriminative metric space by augmenting the softmax classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning models, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.
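A minimal sketch of the large-margin idea for a metric-based classifier: on top of a softmax over negative distances to class prototypes, a hinge term asks the distance to each wrong prototype to exceed the distance to the correct one by a margin m. The exact loss, margin value, and 2-D prototypes below are illustrative, not the paper's formulation.

```python
import numpy as np

def large_margin_loss(z, protos, label, m=1.0):
    d = np.linalg.norm(protos - z, axis=1)                # distances to prototypes
    softmax = np.exp(-d) / np.exp(-d).sum()               # metric-based classifier
    ce = -np.log(softmax[label])                          # softmax classification loss
    others = np.delete(d, label)
    hinge = np.maximum(0.0, m + d[label] - others).sum()  # large-margin distance loss
    return ce + hinge

protos = np.array([[0.0, 0.0], [4.0, 0.0]])
z_easy = np.array([0.1, 0.0])    # close to its own prototype -> tiny loss
z_hard = np.array([2.0, 0.0])    # equidistant -> margin violated, large loss
print(large_margin_loss(z_easy, protos, 0), large_margin_loss(z_hard, protos, 0))
```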

Related papers
Saeed Akhoondian Amiri (Feb 22, 2021)
Sara Ines Rizo Rodriguez, Francisco de Assis Tenorio de Carvalho (Feb 18, 2021)
Acoustic Structure Inverse Design and Optimization Using Deep Learning. Xuecong Sun, Han Jia, Yuzhen Yang, Han Zhao, Yafeng Bi, Zhaoyong Sun, Jun Yang (Feb 18, 2021)
A Probabilistic Model for Discriminative and Neuro-Symbolic Semi-Supervised Learning. Carl Allen, Ivana Balažević, Timothy Hospedales (Feb 18, 2021)
Elad Hazan, Karan Singh (Feb 18, 2021)
Luiz F. O. Chamon, Alejandro Ribeiro (Feb 17, 2021)
Takuya Kanazawa (Feb 17, 2021)
Zhenhua Lin, Hans-Georg Müller (Feb 17, 2021)
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto (Apr 23, 2019)
Yong Wang, Xiao-Ming Wu, Qimai Li, Jiatao Gu, Wangmeng Xiang, Lei Zhang, Victor O. K. Li (Jul 8, 2018)