There is growing interest in the so-called Bayesian Predictive Inference approach, which allows one to perform Bayesian inference without specifying a likelihood and prior for the model, and without running any MCMC. Instead, only a sequence of predictive distributions for the observations is required, and inference on the unknown estimand can be performed, cheaply and in parallel, via bootstrap-type schemes. Which classes of predictive distributions can be used within this framework is still a key open question. We relax commonly used probabilistic assumptions on the observations, namely exchangeability and conditional identical distribution, and on their predictive distributions, namely being measure-valued martingales, by introducing the new class of Almost Conditionally Identically Distributed (a.c.i.d.) random variables. This class assumes that the predictive distributions are measure-valued almost supermartingales, and is parametrized by a sequence $(\xi_n)_{n>0}$ that regulates the decay of conditional dependence among future observations. Under mild summability assumptions on $(\xi_n)_{n>0}$, the resulting sequence of observations is shown to be asymptotically exchangeable, and hence amenable to Bayesian Predictive Inference techniques. A.c.i.d. random variables arise naturally in recursive algorithms, and the class includes classic approaches in Statistics and Learning Theory, such as kernel estimators, as well as more novel ones, such as parametric Bayesian bootstraps.
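To make the "bootstrap-type schemes" concrete, here is a minimal sketch of predictive resampling under one simple conditionally identically distributed predictive, the Pólya-urn (empirical) predictive, where each future observation is drawn uniformly from all points seen so far. The function name, parameters, and choice of predictive are illustrative assumptions, not the paper's construction; the sketch only shows the general mechanism of simulating the future from a predictive sequence and repeating independently to obtain posterior-like draws of an estimand (here, the mean).

```python
import numpy as np

def predictive_resample_mean(x, n_forward=2000, n_draws=200, seed=0):
    """Illustrative predictive resampling for the mean (assumed setup).

    Uses the Polya-urn empirical predictive: each imputed future point
    is drawn uniformly from all points seen so far (observed + imputed).
    Each forward simulation yields one draw of the limiting mean; the
    draws are independent, so they could be computed in parallel.
    """
    rng = np.random.default_rng(seed)
    draws = np.empty(n_draws)
    for b in range(n_draws):
        seq = list(x)  # start each chain from the observed sample
        for _ in range(n_forward):
            # sample the next observation from the current predictive
            seq.append(seq[rng.integers(len(seq))])
        draws[b] = np.mean(seq)  # estimand evaluated on the long sequence
    return draws

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=50)  # synthetic observed data
post = predictive_resample_mean(x)           # posterior-like draws of the mean
```

The spread of `post` around the sample mean plays the role of posterior uncertainty; no likelihood, prior, or MCMC is involved, only repeated forward simulation from the predictive rule.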