In this paper we consider a state-space model (SSM) parametrized by some parameter $\theta$, and our aim is to perform joint parameter and state inference. A simple idea for this task, which dates back almost to the origin of the Kalman filter, is to replace the static parameter $\theta$ by a Markov chain $(\theta_t)_{t\geq 0}$ on the parameter space and then to apply a standard filtering algorithm to the extended, or self-organized, SSM. However, implementing this idea in a theoretically justified way has remained an open problem. In this paper we fill this gap by introducing several constructions of the Markov chain $(\theta_t)_{t\geq 0}$ that ensure the validity of the self-organized SSM (SO-SSM) for joint parameter and state inference. Notably, we show that theoretically valid SO-SSMs can be defined even if $\|\mathrm{Var}(\theta_{t}|\theta_{t-1})\|$ converges to 0 slowly as $t\rightarrow\infty$. This result is important since, as illustrated in our numerical experiments, such models can be efficiently approximated using standard particle filter algorithms. While the idea studied in this work was first introduced for online inference in SSMs, it has also proven useful for computing the maximum likelihood estimator (MLE) of a given SSM, since iterated filtering algorithms can be seen as particle filters applied to SO-SSMs whose target parameter value is the MLE of interest. Based on this observation, we also derive constructions of $(\theta_t)_{t\geq 0}$ and theoretical results tailored to these specific applications of SO-SSMs, and as a result we introduce new iterated filtering algorithms. From a practical point of view, the algorithms introduced in this work have the merit of being simple to implement and of requiring only minimal tuning to perform well.
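To make the idea concrete, the following is a minimal sketch (not the paper's algorithm) of a bootstrap particle filter applied to a self-organized SSM: each particle carries a pair $(\theta, x)$, and $\theta$ evolves by a random-walk kernel whose standard deviation decays slowly in $t$, echoing the slow-vanishing-variance regime discussed above. The toy model, the decay exponent, and all numerical choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian SSM (illustrative choice, not from the paper):
#   x_t = theta * x_{t-1} + v_t,  v_t ~ N(0, 1)
#   y_t = x_t + w_t,              w_t ~ N(0, 1)
theta_true = 0.7
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = theta_true * x[t - 1] + rng.normal()
    y[t] = x[t] + rng.normal()

# Self-organized bootstrap particle filter: each particle carries
# (theta, x); theta is perturbed at every step by a Markov kernel
# whose variance shrinks slowly with t.
N = 2000
theta_p = rng.uniform(-1.0, 1.0, N)   # diffuse initial guess for theta
x_p = np.zeros(N)

for t in range(1, T):
    # Slowly shrinking random-walk move on theta: std ~ t^{-0.55}
    # (the polynomial decay rate is an illustrative assumption)
    theta_p = theta_p + rng.normal(0.0, t ** -0.55, N)
    # Propagate each particle's state under its own theta
    x_p = theta_p * x_p + rng.normal(0.0, 1.0, N)
    # Weight by the N(x_t, 1) observation density, then resample
    logw = -0.5 * (y[t] - x_p) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, N, p=w)
    theta_p, x_p = theta_p[idx], x_p[idx]

theta_hat = theta_p.mean()  # posterior-mean-style estimate of theta
print(theta_hat)
```

On this toy example the particle cloud over $\theta$ concentrates near the data-generating value as $t$ grows, which is the behavior the SO-SSM constructions in the paper justify in general.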