We study the approximation of periodic functions by arbitrary linear combinations of $n$ translates of a single function. We construct linear methods of this approximation for univariate functions in a class induced by convolution with a single function, and prove upper bounds on the $L^p$-approximation convergence rate of these methods as $n \to \infty$, for $1 \leq p \leq \infty$. We also generalize these results to classes of multivariate functions defined via convolution with the tensor product of a single function. In the case $p=2$, for this class, we also prove a lower bound on the quantity characterizing the best approximation by arbitrary linear combinations of $n$ translates of an arbitrary function.
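
As a toy numerical illustration of the setting (our sketch, not the paper's construction), the snippet below approximates a smooth $2\pi$-periodic target by a linear combination of $n$ equally spaced translates of a single periodized Gaussian kernel, with coefficients fitted by least squares on a grid; the kernel, grid, and target are illustrative choices.

```python
import numpy as np

def phi(t, eps=2.0, terms=5):
    # periodized Gaussian bump: sum of Gaussians over shifted periods
    return sum(np.exp(-eps * (t + 2 * np.pi * k) ** 2) for k in range(-terms, terms + 1))

n = 16                                        # number of translates
centers = 2 * np.pi * np.arange(n) / n        # equally spaced shift centers
x = np.linspace(0, 2 * np.pi, 400, endpoint=False)
target = np.sin(x) + 0.5 * np.cos(3 * x)      # smooth periodic target

A = np.stack([phi(x - c) for c in centers], axis=1)   # design matrix (400 x n)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)     # best L^2 fit on the grid
print(f"sup-norm error with n={n} translates: {np.max(np.abs(A @ coef - target)):.2e}")
```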

Related content

In mathematics (and in particular functional analysis), convolution is a mathematical operation on two functions ($f$ and $g$) that produces a third function expressing how the shape of one function is modified by the other. The term convolution refers both to the resulting function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reversed and shifted, with the integral evaluated for all values of the shift, producing the convolution function.
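
For reference, the defining integral is

$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau.$$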

This note investigates functions from $\mathbb{R}^d$ to $\mathbb{R} \cup \{\pm \infty\}$ that satisfy axioms of linearity wherever allowed by extended-value arithmetic. They have a nontrivial structure defined inductively on $d$, and unlike finite linear functions, they require $\Omega(d^2)$ parameters to uniquely identify. In particular they can capture vertical tangent planes to epigraphs: a function (never $-\infty$) is convex if and only if it has an extended-valued subgradient at every point in its effective domain, if and only if it is the supremum of a family of "affine extended" functions. These results are applied to the well-known characterization of proper scoring rules, for the finite-dimensional case: it is carefully and rigorously extended here to a more constructive form. In particular it is investigated when proper scoring rules can be constructed from a given convex function.
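
For orientation, here is a minimal sketch of the standard finite-dimensional construction that this note extends (the classical form due to Savage and to Gneiting and Raftery): given a convex $G$ on the simplex with a (sub)gradient map $g$, the rule $S(p, y) = G(p) + \langle g(p), e_y - p \rangle$ is proper, and $G(p) = \sum_i p_i^2$ recovers the Brier score. All names in the snippet are ours.

```python
import numpy as np

def score_from_convex(G, grad_G, p, y):
    # S(p, y) = G(p) + <grad_G(p), e_y - p>; propriety follows from the
    # subgradient inequality: E_{y~q} S(p, y) = G(p) + <g(p), q - p> <= G(q)
    e_y = np.eye(len(p))[y]
    return G(p) + grad_G(p) @ (e_y - p)

G = lambda p: np.sum(p ** 2)          # convex on the simplex
grad_G = lambda p: 2 * p

p = np.array([0.7, 0.2, 0.1])
print(score_from_convex(G, grad_G, p, y=0))   # Brier score: 2*p_0 - sum p_i^2 = 0.86
```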

Reinforcement learning (RL) with linear function approximation has received increasing attention recently. However, existing work has focused on obtaining $\sqrt{T}$-type regret bound, where $T$ is the number of interactions with the MDP. In this paper, we show that logarithmic regret is attainable under two recently proposed linear MDP assumptions provided that there exists a positive sub-optimality gap for the optimal action-value function. More specifically, under the linear MDP assumption (Jin et al. 2019), the LSVI-UCB algorithm can achieve $\tilde{O}(d^{3}H^5/\text{gap}_{\text{min}}\cdot \log(T))$ regret; and under the linear mixture MDP assumption (Ayoub et al. 2020), the UCRL-VTR algorithm can achieve $\tilde{O}(d^{2}H^5/\text{gap}_{\text{min}}\cdot \log^3(T))$ regret, where $d$ is the dimension of feature mapping, $H$ is the length of episode, $\text{gap}_{\text{min}}$ is the minimal sub-optimality gap, and $\tilde O$ hides all logarithmic terms except $\log(T)$. To the best of our knowledge, these are the first logarithmic regret bounds for RL with linear function approximation. We also establish gap-dependent lower bounds for the two linear MDP models.
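
For concreteness, the sub-optimality gap is standardly defined (our gloss, not quoted from the paper) as $\Delta_h(s,a) = V_h^*(s) - Q_h^*(s,a)$, where $h$ indexes the step within an episode, and

$$\text{gap}_{\text{min}} = \min_{h,s,a}\big\{\Delta_h(s,a) : \Delta_h(s,a) > 0\big\},$$

so the assumption is that every sub-optimal action is sub-optimal by a uniformly positive margin.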

We study reinforcement learning for finite-horizon episodic Markov decision processes with adversarial rewards and full-information feedback, where the unknown transition probability function is a linear function of a given feature mapping. We propose an optimistic policy optimization algorithm with a Bernstein bonus and show that it can achieve $\tilde{O}(dH\sqrt{T})$ regret, where $H$ is the length of the episode, $T$ is the number of interactions with the MDP, and $d$ is the dimension of the feature mapping. Furthermore, we also prove a matching lower bound of $\tilde{\Omega}(dH\sqrt{T})$ up to logarithmic factors. To the best of our knowledge, this is the first computationally efficient, nearly minimax optimal algorithm for adversarial Markov decision processes with linear function approximation.
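
In this adversarial setting the regret is measured against the best fixed policy in hindsight (standard definition, our gloss):

$$\text{Regret}(K) = \max_{\pi} \sum_{k=1}^{K} \Big( V_k^{\pi}(s_1) - V_k^{\pi_k}(s_1) \Big),$$

where $V_k$ denotes the value under the $k$-th adversarially chosen reward, $\pi_k$ is the learner's policy in episode $k$, and $T = KH$.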

We provide a lower bound showing that the $O(1/k)$ convergence rate of the NoLips method (a.k.a. Bregman Gradient) is optimal for the class of functions satisfying the $h$-smoothness assumption. This assumption, also known as relative smoothness, appeared in the recent developments around the Bregman Gradient method, where acceleration remained an open issue. On the way, we show how to constructively obtain the corresponding worst-case functions by extending the computer-assisted performance estimation framework of Drori and Teboulle (Mathematical Programming, 2014) to Bregman first-order methods, and to handle the classes of differentiable and strictly convex functions.
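
For context, a sketch of one NoLips/Bregman Gradient step (the standard method, not the paper's worst-case construction): the update is $x_{k+1} = \operatorname{argmin}_x \{ \langle \nabla f(x_k), x \rangle + \tfrac{1}{\lambda} D_h(x, x_k) \}$, and with the entropy kernel $h(x) = \sum_i x_i \log x_i$ on the positive orthant it has the closed form below; the quadratic objective is an illustrative choice, not an $h$-smooth worst case.

```python
import numpy as np

def nolips_step(grad_f, x, lam):
    # Bregman step with h(x) = sum_i x_i log x_i: minimizing
    # <grad_f(xk), x> + (1/lam) * D_h(x, xk) gives xk * exp(-lam * grad_f(xk))
    return x * np.exp(-lam * grad_f(x))

grad_f = lambda x: x - 2.0           # illustrative objective f(x) = 0.5*||x - 2||^2
x = np.ones(3)
for _ in range(200):
    x = nolips_step(grad_f, x, lam=0.1)
print(x)                              # iterates stay positive and approach [2, 2, 2]
```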

We propose a novel technique for faster DNN training which systematically applies sample-based approximation to the constituent tensor operations, i.e., matrix multiplications and convolutions. We introduce new sampling techniques, study their theoretical properties, and prove that they provide the same convergence guarantees when applied to SGD DNN training. We apply approximate tensor operations to single and multi-node training of MLP and CNN networks on MNIST, CIFAR-10 and ImageNet datasets. We demonstrate up to 66% reduction in the amount of computations and communication, and up to 1.37x faster training time while maintaining negligible or no impact on the final test accuracy.
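
For intuition, here is a generic sketch of sample-based approximate matrix multiplication in the spirit of classical column-row sampling (not necessarily the authors' exact sampler): approximate $AB$ by $k$ sampled outer products $A_{:,i} B_{i,:}$, drawn with probability proportional to $\|A_{:,i}\|\,\|B_{i,:}\|$ and rescaled for unbiasedness.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_matmul(A, B, k):
    # sample k of the n outer products A[:, i] B[i, :] with p_i proportional
    # to ||A[:, i]|| * ||B[i, :]||, rescaling by 1/(k p_i) to keep E[C] = A @ B
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p /= p.sum()
    idx = rng.choice(A.shape[1], size=k, replace=True, p=p)
    return (A[:, idx] / (k * p[idx])) @ B[idx, :]

A, B = rng.standard_normal((100, 500)), rng.standard_normal((500, 80))
C = sampled_matmul(A, B, k=200)
print(np.linalg.norm(C - A @ B) / np.linalg.norm(A @ B))   # relative Frobenius error
```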

We study the extent to which wide neural networks may be approximated by Gaussian processes when initialized with random weights. It is a well-established fact that as the width of a network goes to infinity, its law converges to that of a Gaussian process. We make this quantitative by establishing explicit convergence rates for the central limit theorem in an infinite-dimensional functional space, metrized with a natural transportation distance. We identify two regimes of interest; when the activation function is polynomial, its degree determines the rate of convergence, while for non-polynomial activations, the rate is governed by the smoothness of the function.
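
A quick empirical illustration of the infinite-width Gaussian limit (our sketch; the paper's contribution is the quantitative rates, not this qualitative observation): the output of a random width-$n$ one-hidden-layer network at a fixed input, with i.i.d. weights and $1/\sqrt{n}$ scaling, is approximately Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 5, 1000, 2000
x = rng.standard_normal(d)                     # fixed input

# f(x) = (1/sqrt(n)) * sum_i a_i * tanh(w_i . x) over many random networks
W = rng.standard_normal((trials, n, d))
a = rng.standard_normal((trials, n))
f = (a * np.tanh(W @ x)).sum(axis=1) / np.sqrt(n)

# the limiting law is N(0, E[tanh(w.x)^2]); compare empirical moments
print(f"empirical mean {f.mean():+.3f}, std {f.std():.3f}")
```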

We consider the mathematical analysis and numerical approximation of a system of nonlinear partial differential equations that arises in models that have relevance to steady isochoric flows of colloidal suspensions. The symmetric velocity gradient is assumed to be a monotone nonlinear function of the deviatoric part of the Cauchy stress tensor. We prove the existence of a weak solution to the problem, and, under the additional assumption that the nonlinearity involved in the constitutive relation is Lipschitz continuous, we also prove uniqueness of the weak solution. We then construct mixed finite element approximations of the system using both conforming and nonconforming finite element spaces. For both of these we prove the convergence of the method to the unique weak solution of the problem, and in the case of the conforming method we provide a bound on the error between the analytical solution and its finite element approximation in terms of the best approximation error from the finite element spaces. We propose first a Lions-Mercier type iterative method and then a classical fixed-point algorithm to solve the finite-dimensional problems resulting from the finite element discretisation of the system of nonlinear partial differential equations under consideration, and we present numerical experiments that illustrate the practical performance of the proposed numerical method.
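
Schematically (in our notation, not quoted from the paper), the constitutive relation has the implicit form

$$\mathbf{D}(\mathbf{u}) = \mathcal{F}\big(\mathbf{T}^{\text{dev}}\big), \qquad \big(\mathcal{F}(\mathbf{S}_1) - \mathcal{F}(\mathbf{S}_2)\big) : \big(\mathbf{S}_1 - \mathbf{S}_2\big) \ge 0,$$

where $\mathbf{D}(\mathbf{u}) = \tfrac{1}{2}(\nabla \mathbf{u} + \nabla \mathbf{u}^{\top})$ is the symmetric velocity gradient, $\mathbf{T}^{\text{dev}}$ is the deviatoric part of the Cauchy stress tensor, and the second condition is the monotonicity assumption.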

Divergence-free (div-free) and curl-free vector fields are pervasive in many areas of science and engineering, from fluid dynamics to electromagnetism. A common problem that arises in applications is that of constructing smooth approximants to these vector fields and/or their potentials based only on discrete samples. Additionally, it is often necessary that the vector approximants preserve the div-free or curl-free properties of the field to maintain certain physical constraints. Div/curl-free radial basis functions (RBFs) are a particularly good choice for this application as they are meshfree and analytically satisfy the div-free or curl-free property. However, this method can be computationally expensive due to its global nature. In this paper, we develop a technique for bypassing this issue that combines div/curl-free RBFs in a partition of unity framework, where one solves for local approximants over subsets of the global samples and then blends them together to form a div-free or curl-free global approximant. The method is applicable to div/curl-free vector fields in $\mathbb{R}^2$ and tangential fields on two-dimensional surfaces, such as the sphere, and the curl-free method can be generalized to vector fields in $\mathbb{R}^d$. The method also produces an approximant for the scalar potential of the underlying sampled field. We present error estimates and demonstrate the effectiveness of the method on several test problems.
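
A minimal global (non-partitioned) sketch of curl-free RBF interpolation in $\mathbb{R}^2$, assuming a Gaussian basic function $\varphi$: the matrix-valued kernel $\Phi(\mathbf{x}) = -\nabla \nabla^{\top} \varphi(\mathbf{x})$ makes the interpolant analytically curl-free. The sampled field, shape parameter, and node set are illustrative; the paper's method additionally localizes this with a partition of unity.

```python
import numpy as np

eps = 2.0  # shape parameter (illustrative)

def kernel(dx):
    # Phi(x) = -grad grad^T exp(-eps^2 |x|^2) = (2 eps^2 I - 4 eps^4 x x^T) exp(-eps^2 |x|^2)
    r2 = np.sum(dx ** 2, axis=-1)
    outer = dx[..., :, None] * dx[..., None, :]
    return (2 * eps**2 * np.eye(2) - 4 * eps**4 * outer) * np.exp(-eps**2 * r2)[..., None, None]

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (40, 2))                        # sample sites
V = np.stack([np.cos(X[:, 0]) * np.cos(X[:, 1]),       # curl-free field:
              -np.sin(X[:, 0]) * np.sin(X[:, 1])], 1)  # gradient of sin(x)*cos(y)

N = len(X)
K = kernel(X[:, None, :] - X[None, :, :])              # (N, N, 2, 2) kernel blocks
A = K.transpose(0, 2, 1, 3).reshape(2 * N, 2 * N)      # assembled block system
c = np.linalg.solve(A + 1e-10 * np.eye(2 * N), V.reshape(-1))  # tiny ridge for conditioning

def interp(x):
    # s(x) = sum_j Phi(x - x_j) c_j, evaluated at a single point x
    B = kernel(x[None, :] - X)                          # (N, 2, 2)
    return B.transpose(1, 0, 2).reshape(2, 2 * N) @ c

xt = np.array([0.3, -0.2])
print(interp(xt), [np.cos(0.3) * np.cos(-0.2), -np.sin(0.3) * np.sin(-0.2)])
```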

We consider an improper reinforcement learning setting where the learner is given M base controllers for an unknown Markov Decision Process, and wishes to combine them optimally to produce a potentially new controller that can outperform each of the base ones. We propose a gradient-based approach that operates over a class of improper mixtures of the controllers. The value function of the mixture and its gradient may not be available in closed-form; however, we show that we can employ rollouts and simultaneous perturbation stochastic approximation (SPSA) for explicit gradient descent optimization. We derive convergence and convergence rate guarantees for the approach assuming access to a gradient oracle. Numerical results on a challenging constrained queueing task show that our improper policy optimization algorithm can stabilize the system even when each constituent policy at its disposal is unstable.
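
For concreteness, a minimal sketch of the SPSA gradient estimate the approach relies on, applied here to a toy deterministic objective standing in for rollout-based value estimates; the objective and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def spsa_grad(f, theta, c=1e-2):
    # two-point SPSA estimate: perturb all coordinates simultaneously along a
    # random Rademacher direction delta, then divide componentwise by delta
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    return (f(theta + c * delta) - f(theta - c * delta)) / (2 * c) / delta

f = lambda th: np.sum((th - 1.0) ** 2)   # toy objective (a rollout value in the paper)
theta = np.zeros(4)
for _ in range(500):
    theta -= 0.05 * spsa_grad(f, theta)  # gradient descent with the SPSA estimate
print(theta)                              # approaches the minimizer [1, 1, 1, 1]
```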

We derive the optimal signed kernels, in the general case, for classical statistical density estimation; these are a generalization of the famous Epanechnikov kernels.
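
For context, a minimal sketch of classical kernel density estimation with the nonnegative Epanechnikov kernel $K(u) = \tfrac{3}{4}(1 - u^2)$ on $[-1, 1]$, the kernel that the signed constructions generalize; the data and bandwidth are illustrative.

```python
import numpy as np

def epanechnikov_kde(x, data, h):
    # f_hat(x) = (1/(n h)) * sum_i K((x - x_i) / h) with the Epanechnikov kernel
    u = (x[:, None] - data[None, :]) / h
    K = 0.75 * np.clip(1 - u ** 2, 0, None)   # zero outside [-1, 1]
    return K.mean(axis=1) / h

rng = np.random.default_rng(0)
data = rng.standard_normal(1000)
grid = np.linspace(-3, 3, 7)
print(epanechnikov_kde(grid, data, h=0.4))    # compare with the N(0, 1) density
```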

Related papers
Bo Waggoner · Feb 18, 2021
Jiafan He, Dongruo Zhou, Quanquan Gu · Feb 18, 2021
Jiafan He, Dongruo Zhou, Quanquan Gu · Feb 17, 2021
Radu-Alexandru Dragomir, Adrien Taylor, Alexandre d'Aspremont, Jérôme Bolte · Feb 17, 2021
Menachem Adelman, Kfir Y. Levy, Ido Hakimi, Mark Silberstein · Feb 17, 2021
Ronen Eldan, Dan Mikulincer, Tselil Schramm · Feb 17, 2021
Andrea Bonito, Vivette Girault, Diane Guignard, Kumbakonam R. Rajagopal, Endre Süli · Feb 17, 2021
Kathryn P. Drake, Edward J. Fuselier, Grady B. Wright · Feb 16, 2021
Improper Learning with Gradient-based Policy Optimization: Mohammadi Zaki, Avinash Mohan, Aditya Gopalan, Shie Mannor · Feb 16, 2021
M. R. Formica, E. Ostrovsky, L. Sirota · Feb 15, 2021