** In this work, we present a novel class of parallelizable high-order time integration schemes for the approximate solution of additive ODEs. The methods achieve high order through a combination of a suitable quadrature formula involving multiple derivatives of the ODE's right-hand side and a predictor-corrector ansatz. The latter is designed so that parallelism in time becomes possible. We present a thorough analysis as well as numerical results that showcase the scaling opportunities of methods from this class of solvers. **
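To make the predictor-corrector ansatz concrete, here is a minimal generic sketch: an explicit Euler predictor followed by trapezoidal corrector sweeps. This is a hedged illustration of the general structure only, not the paper's schemes, which use multi-derivative quadrature and a parallel-in-time organization of the corrector.

```python
import math

def predictor_corrector(f, y0, t0, t1, n_steps, n_corr=2):
    """Generic predictor-corrector time stepping: explicit Euler predictor,
    trapezoidal-rule corrector sweeps. Illustrative sketch only; the paper's
    methods use multi-derivative quadrature and are parallel in time."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        fy = f(t, y)
        yp = y + h * fy                             # predictor: explicit Euler
        for _ in range(n_corr):                     # corrector sweeps
            yp = y + 0.5 * h * (fy + f(t + h, yp))  # trapezoidal rule
        t, y = t + h, yp
    return y

# y' = -y, y(0) = 1, so y(1) = e^{-1}
approx = predictor_corrector(lambda t, y: -y, 1.0, 0.0, 1.0, 200)
```

With a second-order corrector and 200 steps, the error against the exact solution is far below 1e-3.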

Integration: Integration, the VLSI Journal.
Publisher: Elsevier.
Site: http://dblp.uni-trier.de/db/journals/integration/

** Lattice-Boltzmann methods are known for their simplicity, efficiency and ease of parallelization, usually relying on uniform Cartesian meshes with a strong bond between spatial and temporal discretization. This fact complicates the crucial issue of reducing the computational cost and the memory impact by automatically coarsening the grid where a fine mesh is unnecessary, while still ensuring the overall quality of the numerical solution through error control. This work provides a possible answer to this interesting question, by connecting, for the first time, the field of lattice-Boltzmann methods (LBM) to the adaptive multiresolution (MR) approach based on wavelets. To this end, we employ an MR multi-scale transform to adapt the mesh as the solution evolves in time according to its local regularity. The collision phase is not affected, due to its inherently local nature and because we do not modify the speed of sound, contrary to most of the LBM/Adaptive Mesh Refinement (AMR) strategies proposed in the literature, thus preserving the original structure of any LBM scheme. Moreover, an original use of the MR framework allows the scheme to resolve the proper physics by efficiently controlling the accuracy of the transport phase. We carefully test our method to demonstrate its adaptability to a wide family of existing lattice-Boltzmann schemes, treating both hyperbolic and parabolic systems of equations, thus being less problem-dependent than the AMR approaches, which struggle to provide effective error control. The ability of the method to yield a very efficient compression rate, and thus a computational cost reduction, for solutions involving localized structures with loss of regularity is also shown, while guaranteeing precise control of the approximation error introduced by the spatial adaptation of the mesh. The numerical strategy is implemented on a specific open-source platform called SAMURAI with a dedicated data structure relying on set algebra. **
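The collide-and-stream structure that the MR adaptation leaves untouched can be seen in a toy D1Q2 BGK scheme for 1D linear advection on a uniform mesh. This is a hedged sketch of a standard LBM kernel under assumed parameters (lattice speed `lam`, relaxation rate `omega`), not the paper's adaptive method.

```python
import numpy as np

def lbm_advection(u0, V, lam=1.0, omega=1.5, n_steps=50):
    """Minimal D1Q2 lattice-Boltzmann scheme for u_t + V u_x = 0 on a
    periodic uniform mesh: local BGK collision followed by streaming.
    Illustrative only; the paper adds wavelet-based mesh adaptation."""
    rho = u0.copy()
    f_plus  = 0.5 * (1 + V / lam) * rho   # initialize at equilibrium
    f_minus = 0.5 * (1 - V / lam) * rho
    for _ in range(n_steps):
        rho = f_plus + f_minus
        eq_p = 0.5 * (1 + V / lam) * rho
        eq_m = 0.5 * (1 - V / lam) * rho
        f_plus  += omega * (eq_p - f_plus)   # collision: purely local
        f_minus += omega * (eq_m - f_minus)
        f_plus  = np.roll(f_plus,  1)        # stream right-movers
        f_minus = np.roll(f_minus, -1)       # stream left-movers
    return f_plus + f_minus

x = np.linspace(0, 1, 100, endpoint=False)
u0 = np.exp(-100 * (x - 0.3) ** 2)
u = lbm_advection(u0, V=1.0)  # transports the bump by 50 cells
```

Collision conserves mass exactly, and with `V == lam` the transport is exact: the Gaussian bump simply shifts by one cell per step.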

** We analyze and optimize two-level methods applied to a symmetric interior penalty discontinuous Galerkin finite element discretization of a singularly perturbed reaction-diffusion equation. Previous analyses of such methods have been performed numerically by Hemker et al. for the Poisson problem. Our main innovation is that we obtain explicit formulas for the optimal relaxation parameter of the two-level method for the Poisson problem in 1D, and very accurate closed-form approximation formulas for the optimal choice in the reaction-diffusion case in all regimes. Our local Fourier analysis, which we perform at the matrix level to make it more accessible to the linear algebra community, shows that for DG penalization parameter values used in practice, it is better to use cell block-Jacobi smoothers of Schwarz type, in contrast to earlier results suggesting that point block-Jacobi smoothers are preferable, based on a smoothing analysis alone. Our analysis also reveals how the performance of the iterative solver depends on the DG penalization parameter, and what value should be chosen to get the fastest iterative solver, providing a new, direct link between DG discretization and iterative solver performance. We illustrate our analysis with numerical experiments and comparisons in higher dimensions and different geometries. **
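A two-level cycle of the kind being optimized can be sketched for the simplest setting: damped Jacobi smoothing with a relaxation parameter `theta`, followed by an exact coarse-grid correction, applied to a 1D finite-difference Poisson matrix. This is a hedged generic sketch, not the DG-SIP discretization of the paper, where `theta` is instead given in closed form.

```python
import numpy as np

def two_level(A, b, x, theta=2/3, nu=2):
    """One two-level cycle: nu sweeps of damped Jacobi with relaxation
    parameter theta, then an exact coarse correction via linear
    interpolation P and restriction R = 0.5 * P.T."""
    n = A.shape[0]
    D = np.diag(A)
    for _ in range(nu):                      # pre-smoothing
        x = x + theta * (b - A @ x) / D
    nc = (n - 1) // 2                        # coarse nodes: every other fine node
    P = np.zeros((n, nc))
    for j in range(nc):
        i = 2 * j + 1
        P[i, j] = 1.0
        P[i - 1, j] += 0.5                   # linear interpolation weights
        P[i + 1, j] += 0.5
    R = 0.5 * P.T
    Ac = R @ A @ P                           # Galerkin coarse operator
    x = x + P @ np.linalg.solve(Ac, R @ (b - A @ x))
    return x

n = 31
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson stencil
b = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_level(A, b, x)
```

With `theta = 2/3` (the classical optimum for damped Jacobi on this problem), the two-grid cycle contracts the residual by roughly an order of magnitude per iteration.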

** An efficient compression technique based on hierarchical tensors for popular option pricing methods is presented. It is shown that the "curse of dimensionality" can be alleviated for the computation of Bermudan option prices with the Monte Carlo least-squares approach as well as the dual martingale method, both using high-dimensional tensorized polynomial expansions. This discretization allows for a simple and computationally cheap evaluation of conditional expectations. Complexity estimates are provided as well as a description of the optimization procedures in the tensor train format. Numerical experiments illustrate the favourable accuracy of the proposed methods. The dynamic programming method yields results comparable to recent neural-network-based methods. **
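The Monte Carlo least-squares (Longstaff-Schwartz) backbone of the dynamic programming approach can be sketched in one dimension, where the conditional expectation is regressed on a small polynomial basis. This is a hedged toy version with assumed market parameters; the paper's contribution is replacing such a basis with high-dimensional tensor-train expansions.

```python
import numpy as np

def lsmc_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_ex=10, n_paths=20000, deg=3, seed=0):
    """Plain Longstaff-Schwartz least-squares Monte Carlo for a 1D
    Bermudan put under geometric Brownian motion. Illustrative sketch;
    the paper uses tensorized polynomial expansions in high dimension."""
    rng = np.random.default_rng(seed)
    dt = T / n_ex
    # simulate GBM paths at the n_ex exercise dates
    Z = rng.standard_normal((n_paths, n_ex))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * Z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)       # payoff at maturity
    for t in range(n_ex - 2, -1, -1):          # backward induction
        cash *= np.exp(-r * dt)                # discount one period
        itm = K - S[:, t] > 0                  # regress on in-the-money paths
        if itm.sum() > deg + 1:
            coef = np.polyfit(S[itm, t], cash[itm], deg)
            cont = np.polyval(coef, S[itm, t])            # continuation value
            ex = np.maximum(K - S[itm, t], 0.0)
            exercise = ex > cont
            cash[np.where(itm)[0][exercise]] = ex[exercise]
    return np.exp(-r * dt) * cash.mean()

price = lsmc_bermudan_put()
```

For these parameters the price should land near the American put value of roughly 6, up to Monte Carlo and regression error.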

** We present a symbolic algorithmic approach for computing invariant manifolds and corresponding reduced systems for differential equations modeling biological networks, including chemical reaction networks for cellular biochemistry and compartmental models for pharmacology, epidemiology and ecology. Multiple time scales of a given network are obtained by scaling, based on tropical geometry. Our reduction is mathematically justified within a singular perturbation setting. The existence of invariant manifolds is subject to hyperbolicity conditions, for which we propose an algorithmic test based on Hurwitz criteria. We finally obtain a sequence of nested invariant manifolds and respective reduced systems on those manifolds. Our theoretical results are generally accompanied by rigorous algorithmic descriptions suitable for direct implementation based on existing off-the-shelf software systems, specifically symbolic computation libraries and Satisfiability Modulo Theories solvers. We present computational examples taken from the well-known BioModels database using our own prototypical implementations. **
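The hyperbolicity condition can be illustrated numerically: a slow manifold is attracting when the transversal eigenvalues of the Jacobian have negative real parts. The paper verifies the sign conditions symbolically via Hurwitz determinants; the numerical check below, on an assumed fast-slow toy Jacobian, is only a hedged stand-in for that test.

```python
import numpy as np

def is_hyperbolic(J, tol=1e-9):
    """Numerical hyperbolicity check: all eigenvalues of the Jacobian J
    have real part bounded away from zero. The paper performs the
    analogous test symbolically using Hurwitz criteria."""
    return bool(np.all(np.abs(np.linalg.eigvals(J).real) > tol))

# toy fast-slow Jacobian: the fast mode (rate -100) relaxes onto the
# invariant manifold where the slow mode (rate -1) evolves
J_fast = np.array([[-100.0, 1.0],
                   [0.0, -1.0]])
hyp = is_hyperbolic(J_fast)
```

Here both eigenvalues (-100 and -1) have nonzero real part, so the hyperbolicity test passes.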

** We prove the large-dimensional Gaussian approximation of a sum of $n$ independent random vectors in $\mathbb{R}^d$ together with fourth-moment error bounds on convex sets and Euclidean balls. We show that compared with classical third-moment bounds, our bounds have near-optimal dependence on $n$ and can achieve improved dependence on the dimension $d$. For centered balls, we obtain an additional error bound that has a sub-optimal dependence on $n$, but recovers the known result of the validity of the Gaussian approximation if and only if $d=o(n)$. We discuss an application to the bootstrap. We prove our main results using Stein's method. **

** We consider the problem of computing homogeneous coordinates of points in a zero-dimensional subscheme of a compact, complex toric variety $X$. Our starting point is a homogeneous ideal $I$ in the Cox ring of $X$, which in practice might arise from homogenizing a sparse polynomial system. We prove a new eigenvalue theorem in the compact toric setting, which leads to a novel, robust numerical approach for solving this problem. Our method works in particular for systems having isolated solutions with arbitrary multiplicities. It depends on the multigraded regularity properties of $I$. We study these properties and provide bounds on the size of the matrices involved in our approach in the case where $I$ is a complete intersection. **
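The classical affine instance of such an eigenvalue theorem is worth recalling: the roots of a univariate polynomial $p$ are the eigenvalues of the multiplication-by-$x$ (companion) matrix acting on $\mathbb{C}[x]/(p)$. The sketch below shows only this textbook special case; the paper generalizes the eigenvalue picture to homogeneous ideals in Cox rings of compact toric varieties.

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of the monic polynomial
    p(x) = x^n + c_{n-1} x^{n-1} + ... + c_0, coeffs = [c_0, ..., c_{n-1}],
    as eigenvalues of the multiplication-by-x matrix on C[x]/(p)."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # x * x^k = x^{k+1} for k < n-1
    C[:, -1] = -np.asarray(coeffs)   # reduce x^n modulo p
    return np.linalg.eigvals(C)

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2)
roots = sorted(companion_roots([2.0, -3.0]).real)
```

The eigenvalues recover the roots 1 and 2, which is the mechanism that makes eigenvalue theorems a robust numerical route to polynomial system solving.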

** Single-stage or single-step high-order temporal discretizations of partial differential equations (PDEs) have shown great promise in delivering high-order accuracy in time with efficient use of computational resources. There has been much success in developing such methods for finite volume method (FVM) discretizations of PDEs. The Picard integral formulation (PIF) has recently made such single-stage temporal methods accessible for finite difference method (FDM) discretizations. PIF methods rely on so-called Lax-Wendroff procedures to tightly couple spatial and temporal derivatives through the governing PDE system to construct high-order Taylor series expansions in time. Going to higher than third order in time requires the calculation of Jacobian-like derivative tensor-vector contractions of increasingly high degree, greatly adding to the complexity of such schemes. To that end, we present in this paper a method for calculating these tensor contractions through a recursive application of a discrete Jacobian operator that readily and efficiently computes the needed contractions entirely agnostic of the PDE system being solved. **
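The flavor of such recursive contractions can be seen in a simple numerical analogue: contracting the $k$-th derivative tensor of $f$ with a direction $v$, i.e. $f'(u)v$, $f''(u)[v,v]$, ..., by recursively applying a central-difference directional derivative. This is a hedged finite-difference stand-in for the paper's discrete Jacobian operator, which is exact to the order of the scheme and tailored to the Lax-Wendroff expansion.

```python
import numpy as np

def dirderiv(f, u, v, order=1, eps=1e-4):
    """Contractions of derivative tensors with a fixed direction v,
    computed recursively: order=1 gives f'(u)v, order=2 gives
    f''(u)[v,v], etc., each level being a central difference of the
    previous one. The operator never needs the PDE-specific tensors."""
    if order == 0:
        return f(u)
    g = lambda w: dirderiv(f, w, v, order - 1, eps)
    return (g(u + eps * v) - g(u - eps * v)) / (2 * eps)

# componentwise f(u) = u^3, so f'(u)v = 3 u^2 v and f''(u)[v,v] = 6 u v^2
f = lambda u: u ** 3
u = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])
first = dirderiv(f, u, v, order=1)    # expect [3, 12]
second = dirderiv(f, u, v, order=2)   # expect [6, 12]
```

Each recursion level needs only evaluations of the level below, so the caller stays entirely agnostic of the structure of $f$, mirroring the PDE-agnostic property claimed in the abstract.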

** Robust estimation is much more challenging in high dimensions than it is in one dimension: most techniques either lead to intractable optimization problems or to estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial-time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and make high-dimensional robust estimation a realistic possibility. **
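The effect of a constant fraction of corruptions on the naive sample mean, and the benefit of even crude filtering, can be seen in a toy experiment. This is a hedged caricature with distance-based scores and assumed parameters; the paper's filtering algorithms use spectral scores and come with dimension-independent guarantees.

```python
import numpy as np

def naive_vs_filtered(seed=0, n=2000, d=50, eps=0.1):
    """Compare the sample mean with a crude filtered mean (drop the points
    farthest from the coordinate-wise median, then average) on Gaussian
    data with an eps-fraction of adversarially shifted points. Toy
    illustration only; real robust estimators filter via spectral scores."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))              # inliers: true mean is 0
    k = int(eps * n)
    X[:k] += 10.0                                # corrupt an eps-fraction
    med = np.median(X, axis=0)                   # robust center estimate
    scores = np.linalg.norm(X - med, axis=1)
    keep = scores <= np.quantile(scores, 1 - 2 * eps)  # drop outlying points
    err_naive = np.linalg.norm(X.mean(axis=0))
    err_filtered = np.linalg.norm(X[keep].mean(axis=0))
    return err_naive, err_filtered

err_naive, err_filtered = naive_vs_filtered()
```

The naive mean is pulled by roughly `eps * shift` in every coordinate (an error of about 7 in Euclidean norm here), while the filtered mean stays close to the truth.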