We consider least-squares variational kernel-based methods for the numerical solution of partial differential equations. Specifically, we use least-squares principles to develop meshfree methods for a general second-order ADN elliptic boundary value problem on a domain $\Omega \subset \mathbb{R}^d$ with Dirichlet boundary conditions. Most notably, these principles do not require the differential operator to be self-adjoint or positive definite, as it would have to be in the Rayleigh-Ritz setting. Nevertheless, the new scheme leads to a symmetric and positive definite algebraic system, allowing us to circumvent the compatibility conditions arising in standard and mixed Galerkin methods. In particular, the resulting method does not require trial subspaces that satisfy any boundary condition. The trial space for discretization is provided by standard kernels that reproduce $H^\tau(\Omega)$, $\tau>d/2$, as their native spaces. The smoothness of the approximating functions can therefore be increased arbitrarily without any additional effort. The solvability of the scheme is proved, and error estimates are derived for functions in appropriate Sobolev spaces. For the weighted discrete least-squares principles, we show that the optimal rate of convergence in $L^2(\Omega)$ is attainable. Furthermore, for $d \le 3$, the proposed method converges at the optimal rate in $H^k(\Omega)$ whenever $k \le \tau$. The condition number of the final linear system is estimated in terms of the discretization quality. Finally, the results of some computational experiments support the theoretical error bounds.

This article studies a priori error analysis for linear parabolic interface problems with measure data in time in a bounded convex polygonal domain in $\mathbb{R}^2$. We use the standard continuous fitted finite element discretization in space. Due to the low regularity of the data of the problem, the solution possesses very low regularity in the entire domain. An a priori error bound in the $L^2(L^2(\Omega))$-norm for the spatially discrete finite element approximations is derived under minimal regularity with the help of $L^2$ projection operators and a duality argument. The interfaces are assumed to be smooth for our purposes.

We study the complexity of optimizing highly smooth convex functions. For a positive integer $p$, we want to find an $\epsilon$-approximate minimum of a convex function $f$, given oracle access to the function and its first $p$ derivatives, assuming that the $p$th derivative of $f$ is Lipschitz. Recently, three independent research groups (Jiang et al., PMLR 2019; Gasnikov et al., PMLR 2019; Bubeck et al., PMLR 2019) developed a new algorithm that solves this problem with $\tilde{O}(1/\epsilon^{\frac{2}{3p+1}})$ oracle calls for constant $p$. This is known to be optimal (up to log factors) for deterministic algorithms, but known lower bounds for randomized algorithms do not match this bound. We prove a new lower bound that matches this bound (up to log factors), and holds not only for randomized algorithms, but also for quantum algorithms.
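For concreteness, the rate $\tilde{O}(1/\epsilon^{2/(3p+1)})$ can be tabulated numerically. The snippet below (log factors ignored, a back-of-the-envelope illustration only) shows how quickly the oracle-call count drops as the smoothness order $p$ grows:

```python
# Evaluate the oracle-complexity rate 1/eps^(2/(3p+1)) for a few values of p.
eps = 1e-6
for p in (1, 2, 3):
    exponent = 2 / (3 * p + 1)
    calls = (1 / eps) ** exponent
    print(f"p = {p}: exponent {exponent:.3f}, ~{calls:.0f} oracle calls")
```

For $p=1$ the exponent is $1/2$ (the classical accelerated rate), and it decreases toward zero as $p$ increases.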

We study the optimization landscape and the stability properties of training problems with squared loss for neural networks and general nonlinear conic approximation schemes. It is demonstrated that, if a nonlinear conic approximation scheme is considered that is (in an appropriately defined sense) more expressive than a classical linear approximation approach and if there exist unrealizable label vectors, then a training problem with squared loss is necessarily unstable in the sense that its solution set depends discontinuously on the label vector in the training data. We further prove that the same effects that are responsible for these instability properties are also the reason for the emergence of saddle points and spurious local minima, which may be arbitrarily far away from global solutions, and that neither the instability of the training problem nor the existence of spurious local minima can, in general, be overcome by adding a regularization term to the objective function that penalizes the size of the parameters in the approximation scheme. The latter results are shown to be true regardless of whether the assumption of realizability is satisfied or not. We demonstrate that our analysis in particular applies to training problems for free-knot interpolation schemes and deep and shallow neural networks with variable widths that involve an arbitrary mixture of various activation functions (e.g., binary, sigmoid, tanh, arctan, soft-sign, ISRU, soft-clip, SQNL, ReLU, leaky ReLU, soft-plus, bent identity, SILU, ISRLU, and ELU). In summary, the findings of this paper illustrate that the improved approximation properties of neural networks and general nonlinear conic approximation instruments are linked in a direct and quantifiable way to undesirable properties of the optimization problems that have to be solved in order to train them.

We consider an optimal control problem for the steady-state Kirchhoff equation, a prototype for nonlocal partial differential equations, different from fractional powers of closed operators. Existence and uniqueness of solutions of the state equation, existence of global optimal solutions, differentiability of the control-to-state map and first-order necessary optimality conditions are established. The aforementioned results require the controls to be functions in $H^1$ and subject to pointwise upper and lower bounds. In order to obtain the Newton differentiability of the optimality conditions, we employ a Moreau-Yosida-type penalty approach to treat the control constraints and study its convergence. The first-order optimality conditions of the regularized problems are shown to be Newton differentiable, and a generalized Newton method is detailed. A discretization of the optimal control problem by piecewise linear finite elements is proposed and numerical results are presented.
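A scalar toy version of the Moreau-Yosida-type penalty idea (our own illustration, not the paper's $H^1$ function-space setting): the box constraint is replaced by a quadratic penalty with parameter $\gamma$, and the penalized minimizer approaches the constrained one as $\gamma \to \infty$.

```python
# Minimise j(u) = 0.5*(u - d)^2 subject to a <= u <= b, replacing the
# constraint by the penalty (gamma/2) * (max(0, u - b)^2 + max(0, a - u)^2).
a, b, d = 0.0, 1.0, 2.0   # bounds and target; d > b makes the upper bound active

def penalized_min(gamma):
    # Stationarity of the penalized objective for u > b:
    #   (u - d) + gamma * (u - b) = 0  =>  u = (d + gamma*b) / (1 + gamma)
    return (d + gamma * b) / (1 + gamma)

for gamma in (1e0, 1e2, 1e4):
    u = penalized_min(gamma)
    print(f"gamma = {gamma:.0e}: u = {u:.6f}, constraint violation = {u - b:.2e}")
```

As $\gamma \to \infty$ the penalized minimizer converges to the constrained solution $u^* = b$ at rate $O(1/\gamma)$, which is the kind of convergence the paper studies in the function-space setting.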

This work derives explicit series reversions for the solution of Calder\'on's problem. The governing elliptic partial differential equation is $\nabla\cdot(A\nabla u)=0$ in a bounded Lipschitz domain and with a matrix-valued coefficient. The corresponding forward map sends $A$ to a projected version of a local Neumann-to-Dirichlet operator, allowing for the use of partial boundary data and finitely many measurements. It is first shown that the forward map is analytic, and subsequently reversions of its Taylor series up to specified orders lead to a family of numerical methods for solving the inverse problem with increasing accuracy. The convergence of these methods is shown under conditions that ensure the invertibility of the Fr\'echet derivative of the forward map. The introduced numerical methods are of the same computational complexity as solving the linearised inverse problem. The analogous results are also presented for the smoothened complete electrode model.

We introduce a novel minimal order hybrid Discontinuous Galerkin (HDG) and a novel mass conserving mixed stress (MCS) method for the approximation of incompressible flows. To this end we employ the $H(\operatorname{div})$-conforming linear Brezzi-Douglas-Marini space and the lowest order Raviart-Thomas space for the approximation of the velocity and the vorticity, respectively. Our methods are based on the physically correct diffusive flux $-\nu \varepsilon(u)$ and provide exactly divergence-free discrete velocity solutions, optimal (pressure robust) error estimates and a minimal number of coupling degrees of freedom. For the stability analysis we introduce a new Korn-like inequality for vector-valued element-wise $H^1$ and normal continuous functions. The work concludes with numerical examples that validate the theoretical findings and compare the novel methods in terms of condition numbers with respect to discrete stability parameters.

The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a MAP estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager-Machlup functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the $\Gamma$-convergence of Onsager-Machlup functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
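In the finite-dimensional Gaussian case (an illustrative assumption, far simpler than the Besov priors treated in the paper), the Onsager-Machlup functional reduces to a Tikhonov functional and the MAP estimator is available in closed form:

```python
import numpy as np

# Linear model y = A x + eta with noise eta ~ N(0, sigma^2 I) and prior
# x ~ N(0, I). The negative log-posterior (Onsager-Machlup functional) is
#   I(x) = ||A x - y||^2 / (2 sigma^2) + ||x||^2 / 2,
# whose minimiser is the MAP / Tikhonov-regularised solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(20)

# Closed form: (A^T A / sigma^2 + I)^{-1} A^T y / sigma^2
x_map = np.linalg.solve(A.T @ A / sigma**2 + np.eye(5), A.T @ y / sigma**2)

# Sanity check: the gradient of I vanishes at the MAP estimator.
grad_norm = np.linalg.norm(A.T @ (A @ x_map - y) / sigma**2 + x_map)
print(f"gradient norm at x_map: {grad_norm:.2e}")
```

The paper's question is what happens to such minimisers when the functionals themselves converge, which is exactly what $\Gamma$-convergence controls.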

The aim of this paper is to apply a high-order discontinuous-in-time scheme to second-order hyperbolic partial differential equations (PDEs). We first discretize the PDEs in time while keeping the spatial differential operators undiscretized. The well-posedness of this semi-discrete scheme is analyzed and a priori error estimates are derived in the energy norm. We then combine this $hp$-version discontinuous Galerkin method for temporal discretization with an $H^1$-conforming finite element approximation for the spatial variables to construct a fully discrete scheme. A priori error estimates are derived both in the energy norm and the $L^2$-norm. Numerical experiments are presented to verify the theoretical results.

We present an hp-adaptive virtual element method (VEM) based on the hypercircle method of Prager and Synge for the approximation of solutions to diffusion problems. We introduce a reliable and efficient a posteriori error estimator, which is computed by solving an auxiliary global mixed problem. We show that the mixed VEM satisfies a discrete inf-sup condition, with inf-sup constant independent of the discretization parameters. Furthermore, we construct a stabilization for the mixed VEM, with explicit bounds in terms of the local degree of accuracy of the method. The theoretical results are supported by several numerical experiments, including a comparison with the residual a posteriori error estimator. The numerics exhibit the p-robustness of the proposed error estimator. In addition, we provide a first step towards the localized flux reconstruction in the virtual element framework, which leads to an additional reliable a posteriori error estimator that is computed by solving local (cheap-to-solve and parallelizable) mixed problems. We provide theoretical and numerical evidence that the proposed local error estimator suffers from a lack of efficiency.
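A minimal 1-D sketch of the Prager-Synge hypercircle identity underlying such estimators (toy data and our own construction, not the paper's VEM setting): for $-u'' = f$ with $u(0)=u(1)=0$, any $v \in H^1_0$ and any equilibrated flux $\sigma$ with $\sigma' + f = 0$ satisfy $\|\sigma - v'\|^2 = \|u' - v'\|^2 + \|\sigma - u'\|^2$, so the computable quantity $\|\sigma - v'\|$ bounds the energy error from above.

```python
import numpy as np

u = lambda x: x * (1 - x) / 2             # exact solution of -u'' = 1
du = lambda x: 0.5 - x                    # exact gradient
sigma = lambda x: 0.6 - x                 # any sigma with sigma' = -1 is equilibrated

nodes = np.linspace(0, 1, 6)              # coarse mesh for a P1 interpolant v of u
slopes = np.diff(u(nodes)) / np.diff(nodes)

x = (np.arange(20000) + 0.5) / 20000      # midpoint quadrature grid on (0,1)
idx = np.clip(np.searchsorted(nodes, x, side="right") - 1, 0, len(slopes) - 1)
dv = slopes[idx]                          # piecewise-constant derivative of v

L2 = lambda g: np.sqrt(np.mean(g**2))     # L^2(0,1)-norm via midpoint rule
err, est, defect = L2(du(x) - dv), L2(sigma(x) - dv), L2(sigma(x) - du(x))
print(f"energy error {err:.4f} <= guaranteed estimator {est:.4f}")
```

Here the estimator overestimates by exactly the flux defect $\|\sigma - u'\|$; driving that defect down is the role of the auxiliary (global or localized) mixed problems in the paper.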

We prove necessary density conditions for sampling in spectral subspaces of a second-order uniformly elliptic differential operator on $\mathbb{R}^d$ with slowly oscillating symbol. For constant-coefficient operators, these are precisely Landau's necessary density conditions for bandlimited functions, but for more general elliptic differential operators it has been unknown whether such a critical density even exists. Our results prove the existence of a suitable critical sampling density and compute it in terms of the geometry defined by the elliptic operator. In dimension 1, functions in a spectral subspace can be interpreted as functions of variable bandwidth, and we obtain a new critical density for variable bandwidth. The methods combine the spectral theory and the regularity theory of elliptic partial differential operators, some elements of limit operators, certain compactifications of $\mathbb{R}^d$, and the theory of reproducing kernel Hilbert spaces.
