This paper presents the first general (supervised) statistical learning framework for point processes in general spaces. Our approach combines two new concepts, which we define in the paper: i) bivariate innovations, which are measures of discrepancy/prediction accuracy between two point processes, and ii) point process cross-validation (CV), which we define here through point process thinning. The general idea is to carry out the fitting by predicting CV-generated validation sets using the corresponding training sets; the prediction error, which we minimise, is measured by means of bivariate innovations. Having established various theoretical properties of our bivariate innovations, we study in detail the case where the CV procedure is obtained through independent thinning, and we apply our statistical learning methodology to three typical spatial statistical settings: parametric intensity estimation, non-parametric intensity estimation and Papangelou conditional intensity fitting. Aside from deriving theoretical properties related to these settings, in each of them we show numerically that our statistical learning approach outperforms the state of the art in terms of mean (integrated) squared error.
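As a concrete illustration of thinning-based cross-validation, here is a minimal Python sketch (our own illustration; the function name and retention probability are hypothetical) that splits an observed point pattern into a training set and a validation set by independent thinning, assigning each point to the training set independently with probability p:

```python
import numpy as np

def independent_thinning_split(points, p, rng=None):
    """Split a point pattern into training/validation sets by
    independent thinning: each point is kept in the training set
    independently with probability p and assigned to the
    validation set otherwise."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points)
    keep = rng.random(len(points)) < p
    return points[keep], points[~keep]

# Example: a homogeneous Poisson pattern on the unit square.
rng = np.random.default_rng(0)
n = rng.poisson(200)
pattern = rng.random((n, 2))
train, validation = independent_thinning_split(pattern, p=0.8, rng=1)
```

For a Poisson process with intensity λ, the resulting training and validation patterns are independent Poisson processes with intensities pλ and (1−p)λ, which is what makes prediction of the validation set from the training set tractable.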


Related content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach programming fundamentals, and it has been employed in a large number of new media and interactive art works.

We consider a sequence of variables having a multinomial distribution, where the numbers of trials corresponding to these variables are large and possibly different. The multinomial probabilities of the categories are assumed to vary randomly depending on batches. The proposed framework is interesting from the perspective of various practical applications, such as predicting the winner of an election or forecasting the market share of different brands. In this work, we first derive sufficient conditions for asymptotic normality of the estimates of the multinomial cell probabilities, and of corresponding suitable transformations. Then, we consider a Bayesian setting to implement our model. We consider hierarchical priors using multivariate normal and inverse Wishart distributions, and establish posterior consistency. Based on this result, and following appropriate Gibbs sampling algorithms, we can draw inferences about aggregate data. The methodology is illustrated in detail with two real-life applications, in the contexts of a political election and sales forecasting. Additional insights into its effectiveness are derived through a simulation study.
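As a quick illustration of the kind of asymptotic normality involved (our sketch, not the authors' conditions): for a single multinomial with a large number of trials n and cell probabilities p, the vector of observed proportions is approximately N(p, (diag(p) − pp^T)/n), which a short Monte Carlo check confirms:

```python
import numpy as np

# For a multinomial sample with n trials and cell probabilities p,
# the vector of observed proportions is asymptotically
# N(p, (diag(p) - p p^T) / n).
rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])
n = 10_000

props = rng.multinomial(n, p, size=5_000) / n
emp_cov = np.cov(props.T)
theo_cov = (np.diag(p) - np.outer(p, p)) / n
print(np.max(np.abs(emp_cov - theo_cov)))  # close to zero
```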


The missing data issue is ubiquitous in health studies. Variable selection in the presence of both missing covariates and outcomes is an important statistical research topic, but it has been less studied. The existing literature focuses on parametric regression techniques that provide direct parameter estimates of the regression model. In practice, parametric regression models are often sub-optimal for variable selection because they are susceptible to misspecification. Machine learning methods considerably weaken the parametric assumptions and increase modeling flexibility, but they do not provide a variable importance measure as naturally defined as the covariate effects native to parametric models. We investigate a general variable selection approach when both the covariates and outcomes can be missing at random and have general missing data patterns. This approach exploits the flexibility of machine learning modeling techniques and bootstrap imputation, which is amenable to nonparametric methods in which the covariate effects are not directly available. We conduct extensive simulations investigating the practical operating characteristics of the proposed variable selection approach when combined with four tree-based machine learning methods (XGBoost, Random Forests, Bayesian Additive Regression Trees (BART) and Conditional Random Forests) and two commonly used parametric methods (lasso and backward stepwise selection). Numerical results show that XGBoost and BART have the overall best performance across various settings. Guidance for choosing methods appropriate to the structure of the analysis data at hand is discussed. We further demonstrate the methods via a case study of risk factors for the 3-year incidence of metabolic syndrome with data from the Study of Women's Health Across the Nation.
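A minimal sketch of the general recipe (bootstrap, impute, fit a flexible learner, aggregate importances), using scikit-learn stand-ins rather than the paper's exact pipeline, assuming for simplicity that only the covariates are missing; the selection threshold is purely illustrative:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestClassifier

def bootstrap_importance_selection(X, y, n_boot=50, seed=0):
    """Variable selection via bootstrap imputation: resample,
    impute each bootstrap sample, fit a flexible learner, and
    aggregate its variable importances across replicates."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    importances = np.zeros((n_boot, p))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)              # bootstrap resample
        Xb = IterativeImputer(random_state=b).fit_transform(X[idx])
        model = RandomForestClassifier(random_state=b).fit(Xb, y[idx])
        importances[b] = model.feature_importances_
    # Keep variables whose average importance exceeds the
    # uniform-importance baseline 1/p (illustrative threshold).
    return importances.mean(axis=0) > 1 / p
```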


We consider the problem of detecting a general sparse mixture and obtain an explicit characterization of the phase transition under some conditions, generalizing the univariate results of Cai and Wu. Additionally, we provide a sufficient condition for the adaptive optimality of a Higher Criticism type testing statistic formulated by Gao and Ma. In the course of establishing these results, we offer a unified perspective through the large deviations theory. The phase transition and adaptive optimality we establish are direct consequences of the large deviation principle of the normalized log-likelihood ratios between the null and the signal distributions.
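For reference, the classic Donoho–Jin form of the Higher Criticism statistic (the Gao–Ma variant differs in its details) can be computed from p-values as follows; this is our own sketch:

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """Donoho-Jin Higher Criticism statistic: scans the
    standardized gap between the empirical and uniform CDFs of
    the p-values over the smallest alpha0 fraction."""
    p = np.sort(np.asarray(pvals))
    n = len(p)
    k = max(1, int(alpha0 * n))
    i = np.arange(1, k + 1)
    hc = np.sqrt(n) * (i / n - p[:k]) / np.sqrt(p[:k] * (1 - p[:k]))
    return hc.max()

# Under the null (uniform p-values) HC grows very slowly with n;
# sparse signals push it up much faster.
rng = np.random.default_rng(0)
print(higher_criticism(rng.random(10_000)))
```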


Recent advances in computational methods for intractable models have made network data increasingly amenable to statistical analysis. Exponential random graph models (ERGMs) have emerged as one of the main families of models capable of capturing the complex dependence structure of network data in a wide range of applied contexts. The Bergm package for R has become a popular package for carrying out Bayesian parameter inference, missing data imputation, model selection and goodness-of-fit diagnostics for ERGMs. Over the last few years, the package has been considerably improved in terms of efficiency by adopting some of the state-of-the-art Bayesian computational methods for doubly-intractable distributions. Recently, version 5 of the package was made available on CRAN, having undergone a substantial makeover that makes it more accessible and easier to use for practitioners. New functions include data augmentation procedures based on the approximate exchange algorithm for dealing with missing data, as well as adjusted pseudo-likelihood and pseudo-posterior procedures, which allow for fast approximate inference on the ERGM parameter posterior and the model evidence for networks on several thousand nodes.


Qualitative and quantitative approaches to reasoning about uncertainty can lead to different logical systems for formalizing such reasoning, even when the language for expressing uncertainty is the same. In the case of reasoning about relative likelihood, with statements of the form $\varphi\succsim\psi$ expressing that $\varphi$ is at least as likely as $\psi$, a standard qualitative approach using preordered preferential structures yields a dramatically different logical system than a quantitative approach using probability measures. In fact, the standard preferential approach validates principles of reasoning that are incorrect from a probabilistic point of view. However, in this paper we show that a natural modification of the preferential approach yields exactly the same logical system as a probabilistic approach--not using single probability measures, but rather sets of probability measures. Thus, the same preferential structures used in the study of non-monotonic logics and belief revision may be used in the study of comparative probabilistic reasoning based on imprecise probabilities.
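One illustrative instance of the mismatch (our example; not necessarily the exact principles studied in the paper) is the disjunction principle
$$(\varphi \succsim \psi) \wedge (\varphi \succsim \chi) \;\rightarrow\; \varphi \succsim (\psi \vee \chi),$$
which is valid on standard preordered preferential structures but fails probabilistically: for disjoint events with $P(\psi) = P(\chi) = 0.4$ and $P(\varphi) = 0.5$, we have $P(\varphi) \ge P(\psi)$ and $P(\varphi) \ge P(\chi)$, yet $P(\psi \vee \chi) = 0.8 > P(\varphi)$.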


A composite likelihood is a non-genuine likelihood function that allows one to make inference on limited aspects of a model, such as marginal or conditional distributions. Composite likelihoods are not proper likelihoods and therefore need calibration for their use in inference, from both a frequentist and a Bayesian perspective. The maximizer of the composite likelihood can serve as an estimator, and its variance is assessed by means of a suitably defined sandwich matrix. In the Bayesian setting, the composite likelihood can be adjusted by means of magnitude and curvature methods. Magnitude methods raise the likelihood to a constant power, while curvature methods evaluate the likelihood at a different point by translating, rescaling and rotating the parameter vector. Some authors argue that curvature methods are more reliable in general, but others have proved that magnitude methods are sufficient to recover, for instance, the null distribution of a test statistic. We propose a simple calibration for the marginal posterior distribution of a scalar parameter of interest which is invariant to monotonic and smooth transformations. This can be enough, for instance, in medical statistics, where a single scalar effect measure is often the target.
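In the standard notation of this literature (our summary, not the authors' exact formulas): writing $c\ell(\theta)$ for the composite log-likelihood, $H(\theta) = -\mathrm{E}[\nabla^2 c\ell(\theta)]$ for the sensitivity matrix and $J(\theta) = \mathrm{Var}[\nabla c\ell(\theta)]$ for the variability matrix, the sandwich (Godambe) information is
$$G(\theta) = H(\theta)\, J(\theta)^{-1} H(\theta).$$
Magnitude adjustments use $k\, c\ell(\theta)$ for a scalar $k$ calibrated against $G$, while curvature adjustments use $c\ell\big(\hat\theta + C(\theta - \hat\theta)\big)$, with the matrix $C$ chosen so that the adjusted curvature at $\hat\theta$ matches $G(\hat\theta)$.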


Recent research has proposed neural architectures for solving combinatorial problems in structured output spaces. In many such problems, there may exist multiple solutions for a given input; e.g., a partially filled Sudoku puzzle may have many completions satisfying all constraints. Further, we are often interested in finding any one of the possible solutions, without any preference between them. Existing approaches completely ignore this solution multiplicity. In this paper, we argue that being oblivious to the presence of multiple solutions can severely hamper the training of such networks. Our contribution is twofold. First, we formally define the task of learning one-of-many solutions for combinatorial problems in structured output spaces, which is applicable to several problems of interest, such as N-Queens and Sudoku. Second, we present a generic learning framework that adapts an existing prediction network for a combinatorial problem to handle solution multiplicity. Our framework uses a selection module whose goal is to dynamically determine, for every input, the solution that is most effective for training the network parameters in any given learning iteration. We propose an RL-based approach to jointly train the selection module with the prediction network. Experiments on three different domains, using two different prediction networks, demonstrate that our framework significantly improves accuracy in our setting, obtaining up to a 21-point gain over the baselines.
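To make the role of the selection module concrete, here is a simplified greedy "min-loss" stand-in in PyTorch (our sketch; the paper trains the selection with RL rather than this greedy rule): among the valid solutions for an input, train only against the one the current network finds easiest.

```python
import torch
import torch.nn.functional as F

def one_of_many_loss(logits, targets):
    """Greedy stand-in for solution-multiplicity training.
    `logits` has shape (cells, classes) for one instance and
    `targets` is a list of valid solutions, each a LongTensor of
    shape (cells,).  The loss is taken w.r.t. whichever valid
    solution is currently easiest for the network, and gradients
    flow only through that target."""
    losses = torch.stack([F.cross_entropy(logits, t) for t in targets])
    return losses.min()

# Usage: `logits` come from the prediction network for one puzzle;
# `targets` enumerates its known valid solutions.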


The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
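A minimal numerical illustration of implicit regularization (our sketch): in overparametrized linear regression, gradient descent started from zero converges to the interpolating solution of minimum Euclidean norm, i.e. the pseudoinverse solution.

```python
import numpy as np

# Overparametrized least squares: more features (d) than samples (n).
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent from zero on the squared loss ...
w = np.zeros(d)
lr = 0.01
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y) / n

# ... converges to the minimum-l2-norm interpolating solution,
# i.e. the pseudoinverse solution w* = X^+ y.
w_star = np.linalg.pinv(X) @ y
print(np.abs(X @ w - y).max())      # ~0: perfectly interpolates
print(np.linalg.norm(w - w_star))   # ~0: the min-norm solution
```

Starting from zero, the iterates stay in the row space of X, which is why gradient descent selects the minimum-norm interpolant among the infinitely many solutions.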


This paper surveys the machine learning literature and presents machine learning approaches as optimization models. Such models can benefit from advances in numerical optimization techniques, which have already played a distinctive role in several machine learning settings. In particular, mathematical optimization models are presented for commonly used machine learning approaches for regression, classification, clustering, and deep neural networks, as well as for new emerging applications in machine teaching and empirical model learning. The strengths and shortcomings of these models are discussed and potential research directions are highlighted.
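As one canonical example of such a formulation (our choice of illustration), soft-margin support vector machine classification is the quadratic program
$$\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i \quad \text{s.t.}\quad y_i(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \ i = 1,\dots,n,$$
whose structure is directly amenable to off-the-shelf convex optimization solvers.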


We propose an Active Learning approach to image segmentation that exploits geometric priors to streamline the annotation process. We demonstrate this for both background-foreground and multi-class segmentation tasks in 2D images and 3D image volumes. Our approach combines geometric smoothness priors in the image space with more traditional uncertainty measures to estimate which pixels or voxels are most in need of annotation. For multi-class settings, we additionally introduce two novel criteria for uncertainty. In the 3D case, we use the resulting uncertainty measure to show the annotator voxels lying on the same planar patch, which makes batch annotation much easier than if they were randomly distributed in the volume. The planar patch is found using a branch-and-bound algorithm that finds a patch with the most informative instances. We evaluate our approach on Electron Microscopy and Magnetic Resonance image volumes, as well as on regular images of horses and faces. We demonstrate a substantial performance increase over state-of-the-art approaches.
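The uncertainty half of such a criterion can be illustrated with per-pixel predictive entropy (our sketch; the geometric smoothness prior and the branch-and-bound patch search are not reproduced here):

```python
import numpy as np

def pixel_entropy(probs, eps=1e-12):
    """Per-pixel predictive entropy as an uncertainty measure.
    `probs` has shape (classes, H, W) and sums to 1 over classes;
    high-entropy pixels are the prime candidates for annotation."""
    return -np.sum(probs * np.log(probs + eps), axis=0)

# Toy 2-class probability map.
rng = np.random.default_rng(0)
p1 = rng.random((8, 8))
probs = np.stack([p1, 1 - p1])
uncertainty = pixel_entropy(probs)
```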

Related papers
Soudeep Deb, Rishideep Roy, Shubhabrata Das (April 7)
Subhodh Kotekal (April 6)
Alberto Caimo, Lampros Bouranis, Robert Krause, Nial Friel (April 6)
Matthew Harrison-Trainor, Wesley H. Holliday, Thomas F. Icard III (April 6)
Michele Lambardi di San Miniato, Nicola Sartori (April 5)
Yatin Nandwani, Deepanshu Jindal, Mausam, Parag Singla (April 4)
Peter L. Bartlett, Andrea Montanari, Alexander Rakhlin (March 16)
Claudio Gambella, Bissan Ghaddar, Joe Naoum-Sawaya (January 16, 2019)
Ksenia Konyushkova, Raphael Sznitman, Pascal Fua (January 16, 2018)