Experimental designs based on the classical D-optimal criterion minimize the volume of the linear-approximation inference regions for the parameters using local sensitivity coefficients. For nonlinear models, these designs can be unreliable because the linearized inference regions do not always give a true indication of the exact parameter inference regions. In this article, we apply the profile-based sensitivity coefficients developed by Sulieman et al. [12] to the design of D-optimal experiments for parameter estimation in selected nonlinear models. Profile-based sensitivity coefficients are defined as the total derivative of the model function with respect to the parameters, and they have been shown to account for both parameter co-dependencies and model nonlinearity up to second-order derivatives. This work represents a first attempt to construct experiments using profile-based sensitivity coefficients. Two common nonlinear models are used to illustrate the computational aspects of the profile-based designs, and simulation studies are conducted to demonstrate the efficiency of the constructed experiments.
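To make the classical criterion concrete, here is a minimal sketch of the local D-optimal criterion for a hypothetical two-parameter nonlinear model, y = θ₁·exp(−θ₂x). The model, nominal parameter values, and candidate designs are illustrative assumptions, not taken from the article, and the profile-based (total-derivative) sensitivities it discusses are not implemented here.

```python
import numpy as np

# Illustrative sketch: local D-optimality for y = theta1 * exp(-theta2 * x).
# Model and numbers are assumptions for demonstration only.

def sensitivities(x, theta):
    """Local sensitivity matrix F with columns dy/dtheta_j."""
    t1, t2 = theta
    e = np.exp(-t2 * np.asarray(x, float))
    return np.column_stack([e, -t1 * np.asarray(x, float) * e])

def d_criterion(x, theta):
    """log det(F^T F); a D-optimal design maximizes this."""
    F = sensitivities(x, theta)
    sign, logdet = np.linalg.slogdet(F.T @ F)
    return logdet if sign > 0 else -np.inf

theta0 = (1.0, 0.5)
# Two candidate 2-point designs at the nominal parameters: spreading the
# design points improves the determinant of the information matrix.
print(d_criterion([0.1, 0.2], theta0), d_criterion([0.0, 2.0], theta0))
```

A profile-based variant would replace the columns of F with total derivatives that trace parameter co-dependencies, which is exactly where the article departs from this local construction.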

### Related Content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition offers the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability. Website: http://www.modelsconference.org/

Filters are used in many areas of technology. Their constructions differ, as do the substances being filtered. In some cases it is necessary to take into account the formation of sediments on the walls of the filter, since sediments can change the properties of the filter or even block the filtering apertures entirely. In this article, I construct a mathematical model of sediment growth on the walls of a porous filter. Both an analytical investigation and numerical results are presented, including formulas for the dependence of the near-wall concentration on the inner concentration in the liquid. Flow speed, the calculated working time, purification efficiency, and other parameters prove to be important factors. Varying the aperture radii from membrane to membrane can make contamination uniform along the filter. The numerical results show the importance of preliminary calculation of the filter for the purpose it will serve. The formation of calcic sediment is investigated as an example chemical reaction.
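As a generic illustration of how sediment growth degrades a filter (not the article's model), the sketch below grows a wall layer at a rate proportional to a near-wall concentration and lets the volumetric flux through a cylindrical pore fall with the fourth power of the open radius, as in Hagen–Poiseuille flow. The rate constant, concentration, and initial radius are hypothetical.

```python
import numpy as np

# Generic sketch, not the article's model: wall sediment of growing
# thickness shrinks the open pore radius r; pore flux scales as r^4
# (Hagen-Poiseuille). k, c, r0 are hypothetical values.

r0, k, c = 1.0, 0.05, 1.0     # initial radius, deposition rate, concentration
dt, steps = 0.1, 200
r = r0
flux = [r**4]                  # flux relative to a clean pore of unit radius
for _ in range(steps):
    r = max(r - dt * k * c, 0.0)   # deposited layer thickens, radius shrinks
    flux.append(r**4)
print(flux[0], flux[-1])       # flux collapses as the pore closes
```

Even this crude model shows why a preliminary calculation of the working time matters: the r⁴ dependence makes the late-stage flux loss abrupt rather than gradual.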

In this paper we carry out numerical analysis for a family of simplified gas transport models with hydrate formation and dissociation in the subsurface, under equilibrium and non-equilibrium conditions. These models are adequate for simulation of hydrate phase change at basin and shorter time scales, but the analysis does not account directly for the related effects of evolving hydraulic properties. To our knowledge this is the first analysis of such a model. It is carried out for the transport steps while keeping the pressure solution fixed. We frame the transport model as a conservation law with a non-smooth, space-dependent flux function; the kinetic model approximates this equilibrium. We prove weak stability of the upwind scheme applied to the regularized conservation law. We illustrate the model, confirm convergence with numerical simulations, and demonstrate its use in some relevant equilibrium and non-equilibrium scenarios.
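For readers unfamiliar with the scheme analyzed above, here is a minimal first-order upwind step for the simplest conservation law, linear advection with constant positive speed on a periodic domain. It is a textbook baseline, not the paper's regularized, space-dependent-flux setting; note that the discrete total mass is conserved exactly.

```python
import numpy as np

# Baseline sketch: first-order upwind for u_t + a u_x = 0, a > 0,
# on a periodic grid. Illustrative only; the paper treats a non-smooth
# space-dependent flux, not constant-speed advection.

def upwind_step(u, a, dt, dx):
    # For a > 0 information travels rightward, so difference against
    # the left neighbor (np.roll shifts the array periodically).
    return u - a * dt / dx * (u - np.roll(u, 1))

nx, a = 100, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a                       # CFL number 0.5 (stable for CFL <= 1)
x = np.arange(nx) * dx
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square pulse

total0 = u.sum() * dx                   # discrete mass
for _ in range(100):
    u = upwind_step(u, a, dt, dx)
assert abs(u.sum() * dx - total0) < 1e-12   # mass is conserved exactly
```

Because the scheme is monotone at this CFL number, the solution also stays within the initial bounds [0, 1], the discrete analogue of the stability the paper proves for its regularized problem.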

This paper considers the attenuated Westervelt equation in pressure formulation. The attenuation is by various models proposed in the literature and characterised by the inclusion of non-local operators that give power law damping as opposed to the exponential of classical models. The goal is the inverse problem of recovering a spatially dependent coefficient in the equation, the parameter of nonlinearity $\kappa(x)$, in what becomes a nonlinear hyperbolic equation with nonlocal terms. The overposed measured data is a time trace taken on a subset of the domain or its boundary. We shall show injectivity of the linearised map from $\kappa$ to the overposed data used to recover it and from this basis develop and analyse Newton-type schemes for its effective recovery.

This paper provides the first sample complexity lower bounds for the estimation of simple diffusion models, including the Bass model (used in modeling consumer adoption) and the SIR model (used in modeling epidemics). We show that one cannot hope to learn such models until quite late in the diffusion. Specifically, we show that the time required to collect a number of observations that exceeds our sample complexity lower bounds is large. For Bass models with low innovation rates, our results imply that one cannot hope to predict the eventual number of adopting customers until one is at least two-thirds of the way to the time at which the rate of new adopters is at its peak. In a similar vein, our results imply that in the case of an SIR model, one cannot hope to predict the eventual number of infections until one is approximately two-thirds of the way to the time at which the infection rate has peaked. These limits are borne out in both product adoption data (Amazon), as well as epidemic data (COVID-19).
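The Bass dynamics referred to above can be stated compactly: the adopted fraction F(t) satisfies dF/dt = (p + qF)(1 − F), where p is the innovation rate and q the imitation rate, and the adoption rate peaks at t* = ln(q/p)/(p + q). The sketch below simulates the curve with forward Euler and checks the peak time against that closed form; the parameter values are illustrative, not from the paper.

```python
import numpy as np

# Sketch of the Bass diffusion model dF/dt = (p + q F)(1 - F).
# p (innovation) and q (imitation) values are illustrative.

def bass_curve(p, q, t_end, n=10_000):
    dt = t_end / n
    F = np.empty(n + 1)
    F[0] = 0.0
    for k in range(n):                      # forward Euler integration
        F[k + 1] = F[k] + dt * (p + q * F[k]) * (1.0 - F[k])
    return F

p, q, t_end = 0.01, 0.4, 30.0
F = bass_curve(p, q, t_end)
rate = np.diff(F)                           # per-step adoption rate
t_peak_numeric = rate.argmax() * t_end / 10_000
t_peak_exact = np.log(q / p) / (p + q)      # known closed-form peak time
print(t_peak_numeric, t_peak_exact)
```

The lower bounds in the paper say, roughly, that reliable estimation of (p, q) and hence of the eventual adoption level is impossible until roughly two-thirds of the way to this peak time.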

We present a rigorous convergence analysis for cylindrical approximations of nonlinear functionals, functional derivatives, and functional differential equations (FDEs). The purpose of this analysis is twofold: first, we prove that continuous nonlinear functionals, functional derivatives and FDEs can be approximated uniformly on any compact subset of a real Banach space admitting a basis by high-dimensional multivariate functions and high-dimensional partial differential equations (PDEs), respectively. Second, we show that the convergence rate of such functional approximations can be exponential, depending on the regularity of the functional (in particular its Fr\'echet differentiability), and its domain. We also provide necessary and sufficient conditions for consistency, stability and convergence of cylindrical approximations to linear FDEs. These results open the possibility to utilize numerical techniques for high-dimensional systems such as deep neural networks and numerical tensor methods to approximate nonlinear functionals in terms of high-dimensional functions, and compute approximate solutions to FDEs by solving high-dimensional PDEs. Numerical examples are presented and discussed for prototype nonlinear functionals and for an initial value problem involving a linear FDE.
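A toy instance of the cylindrical-approximation idea (an illustration, not the paper's construction): the nonlinear functional F[u] = ∫₀¹ u(x)² dx is replaced by a function of only the first m coefficients of u in an orthonormal cosine basis. By Parseval's identity the truncation converges to F[u] as m grows, mirroring the convergence of functionals to high-dimensional multivariate functions proved above.

```python
import numpy as np

# Toy cylindrical approximation: F[u] = int_0^1 u(x)^2 dx becomes a
# function of the first m cosine-basis coefficients of u. The test
# function u(x) = x (with F[u] = 1/3 exactly) is an assumption.

N = 20_000
x = (np.arange(N) + 0.5) / N           # midpoint quadrature nodes on [0, 1]
u = x                                   # u(x) = x

def coeff(k):
    """k-th coefficient of u in the orthonormal cosine basis on [0, 1]."""
    phi = np.ones(N) if k == 0 else np.sqrt(2.0) * np.cos(k * np.pi * x)
    return np.mean(u * phi)             # midpoint rule for int u*phi dx

def F_cyl(m):
    """m-dimensional cylindrical approximation of F[u] (Parseval sum)."""
    return sum(coeff(k) ** 2 for k in range(m))

exact = 1.0 / 3.0
errs = [abs(F_cyl(m) - exact) for m in (1, 3, 9)]
print(errs)                             # errors shrink as m grows
```

For this smooth u the coefficients decay like 1/k², so the error falls rapidly with m, a baby version of the regularity-dependent rates established in the paper.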

Flexible estimation of multiple conditional quantiles is of interest in numerous applications, such as studying the effect of pregnancy-related factors on very low or high birth weight. We propose a Bayesian non-parametric method to simultaneously estimate non-crossing, non-linear quantile curves. We expand the conditional distribution function of the response in I-spline basis functions where the covariate-dependent coefficients are modeled using neural networks. By leveraging the approximation power of splines and neural networks, our model can approximate any continuous quantile function. Compared to existing models, our model estimates all rather than a finite subset of quantiles, scales well to high dimensions, and accounts for estimation uncertainty. While the model is arbitrarily flexible, interpretable marginal quantile effects are estimated using accumulative local effect plots and variable importance measures. A simulation study shows that our model can better recover quantiles of the response distribution when the data is sparse, and illustrative applications providing new insights on analyses of birth weight and tropical cyclone intensity are presented.

On-farm experiments can provide farmers with information on more efficient crop management in their own fields. Developments in precision agricultural technologies, such as yield monitoring and variable-rate application technology, allow farmers to implement on-farm experiments. The research framework, including the experimental design and the statistical analysis method, strongly influences the precision of the experiment. Conventional statistical approaches (e.g., ordinary least squares regression) may not be appropriate for on-farm experiments because they cannot accurately account for the underlying spatial variation in a particular response variable (e.g., yield data). The effects of experimental designs and statistical approaches on type I error rates and estimation accuracy were explored through a simulation study hypothetically conducted on experiments in three wheat fields in Japan. Isotropic and anisotropic spatial linear mixed models were established for comparison with ordinary least squares regression models. Repeated designs were not sufficient on their own to reduce both the risk of a type I error and the estimation bias. A combination of a repeated design and an anisotropic model is sometimes required to improve the precision of the experiments. Model selection should be performed to determine whether the anisotropic model is required for analysis of any specific field. The anisotropic model had larger standard errors than the other models, especially when the estimates had large biases. This finding highlights an advantage of anisotropic models, since they enable experimenters to cautiously consider the reliability of estimates that carry a large bias.
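The type I error inflation described above is easy to reproduce in a toy setting. In the hedged sketch below (an illustration, not the paper's simulation), plot errors along a transect follow an AR(1) spatial process, the treatment is laid out as two halves of the field, the true treatment effect is zero, and the naive two-sample t-test still rejects far more often than its nominal 5% level. The sample size, correlation, and layout are assumptions.

```python
import numpy as np

# Toy demonstration of OLS type I error inflation under spatial
# correlation: AR(1) errors along a transect, split-field treatment
# layout, zero true effect. All parameter values are illustrative.

rng = np.random.default_rng(0)

def ar1_noise(n, rho, rng):
    """Stationary AR(1) noise with unit marginal variance."""
    e = np.empty(n)
    e[0] = rng.standard_normal()
    for i in range(1, n):
        e[i] = rho * e[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return e

def naive_t_rejects(y, treat, crit=1.984):   # t critical value, df=98, alpha=0.05
    y0, y1 = y[treat == 0], y[treat == 1]
    s2 = (y0.var(ddof=1) + y1.var(ddof=1)) / 2
    t = (y1.mean() - y0.mean()) / np.sqrt(s2 * (1 / len(y0) + 1 / len(y1)))
    return abs(t) > crit

n, rho, sims = 100, 0.8, 500
treat = np.repeat([0, 1], n // 2)            # split-field layout, no true effect
rate = np.mean([naive_t_rejects(ar1_noise(n, rho, rng), treat)
                for _ in range(sims)])
print(rate)                                  # well above the nominal 0.05
```

A spatial mixed model that models the correlation structure (isotropic or anisotropic, as in the paper) would widen the standard errors and restore the nominal error rate.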

The Unbounded Subset-Sum Problem (USSP) is defined as follows: given a sum $s$ and a set of integers $W=\{p_1,\dots,p_n\}$, output a set of non-negative integers $\{y_1,\dots,y_n\}$ such that $p_1y_1+\dots+p_ny_n=s$. The USSP is an NP-complete problem with no known polynomial-time solution. There is a pseudo-polynomial algorithm for the USSP with $O((p_{1})^{2}+n)$ time complexity and $O(p_{1})$ memory complexity, where $p_{1}$ is the smallest element of $W$ \cite{PH}. This algorithm is polynomial in terms of the number of inputs, but exponential in the size of $p_1$; it is therefore impractical for large-scale problems. In this paper, we first propose an efficient polynomial-time algorithm with $O(n)$ computational complexity for solving the specific case of the USSP where $s> \sum_{i=1}^{k-1}q_iq_{i+1}-q_i-q_{i+1}$, the $q_i$'s are the elements of a small subset of $W$ whose $\gcd$ divides $s$, and $2\le k \le n$. Second, we present another algorithm for smaller values of $s$ with $O(n^2)$ computational complexity that finds the answer for some inputs with a probability between $0.5$ and $1$. Its success probability is directly related to the number of subsets of $W$ whose $\gcd$ divides $s$. This algorithm can solve the USSP with large inputs in polynomial time, no matter how big the inputs are, but in some special cases where $s$ is small it cannot find the answer.
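For reference, a baseline exact solver for the USSP is the standard unbounded-knapsack reachability DP, which runs in $O(ns)$ time. The sketch below implements that baseline (not the cited $O(p_1^2+n)$ residue algorithm or the paper's new algorithms) and recovers the multiplicities $y_i$ by backtracking.

```python
# Baseline exact USSP solver: unbounded-knapsack reachability DP,
# O(n*s) time. This is a reference sketch, not the O(p1^2 + n)
# algorithm cited in the abstract nor the paper's new algorithms.

def ussp(s, weights):
    """Return counts [y_1, ..., y_n] with sum(w_i * y_i) == s, or None."""
    choice = [None] * (s + 1)        # choice[v]: last weight used to reach v
    reachable = [False] * (s + 1)
    reachable[0] = True
    for v in range(1, s + 1):
        for w in weights:
            if w <= v and reachable[v - w]:
                reachable[v], choice[v] = True, w
                break
    if not reachable[s]:
        return None
    counts = {w: 0 for w in weights}  # backtrack to recover multiplicities
    v = s
    while v:
        counts[choice[v]] += 1
        v -= choice[v]
    return [counts[w] for w in weights]

print(ussp(17, [4, 5]))   # 3*4 + 1*5 = 17
print(ussp(11, [4, 8]))   # None: gcd(4, 8) = 4 does not divide 11
```

The second example shows the gcd condition that both of the paper's algorithms rely on: no solution can exist unless the gcd of the chosen subset divides $s$.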

The contribution of this paper is twofold. First, we study the lower-bound complexity of the minimax optimization problem whose objective function is the average of $n$ individual smooth component functions. We consider Proximal Incremental First-order (PIFO) algorithms, which have access to a gradient and proximal oracle for each individual component. We develop a novel approach for constructing adversarial problems, which partitions the tridiagonal matrix of classical examples into $n$ groups. This construction is friendly to the analysis of incremental gradient and proximal oracles. With this approach, we demonstrate lower bounds for first-order algorithms for finding an $\varepsilon$-suboptimal point and an $\varepsilon$-stationary point in different settings. Second, we also derive lower bounds for minimization problems with PIFO algorithms from our approach, which cover the results in \citep{woodworth2016tight} and improve the results in \citep{zhou2019lower}.

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
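The continuous-depth idea can be sketched in a few lines: the hidden state evolves as dh/dt = f(h, t; θ) for a small parameterized f, and a solver plays the role of the stack of layers. The fixed-step Euler integrator and tiny tanh network below are illustrative stand-ins for the black-box adaptive solver and learned dynamics in the paper.

```python
import numpy as np

# Minimal continuous-depth sketch: hidden state follows dh/dt = f(h, t),
# integrated by explicit Euler. The tanh dynamics, weights, and step
# count are illustrative; the paper uses black-box adaptive solvers
# and backpropagates via the adjoint method.

def f(h, t, W, b):
    return np.tanh(W @ h + b)          # parameterized dynamics (autonomous
                                       # here, but f takes t for generality)

def odeint_euler(h0, t0, t1, steps, W, b):
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t, W, b)     # one explicit Euler step
        t += dt
    return h

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((4, 4))
b = np.zeros(4)
h0 = rng.standard_normal(4)
h1 = odeint_euler(h0, 0.0, 1.0, steps=100, W=W, b=b)
print(h1)                              # "output layer" = state at t1
```

Refining the step count changes the answer only up to the solver's truncation error, which is the sense in which these models can trade numerical precision for speed.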

Barbara Kaltenbacher, William Rundell
Jackie Baek, Vivek F. Farias, Andreea Georgescu, Retsef Levi, Tianyi Peng, Deeksha Sinha, Joshua Wilde, Andrew Zheng
Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud