We study the identifiability of parameters and falsifiability of predictions under the process of model expansion in a Bayesian setting. Identifiability is represented by the closeness of the posterior to the prior distribution and falsifiability by the power of posterior predictive tests against alternatives. To study these two concepts formally, we develop information-theoretic proxies, which we term the identifiability and falsifiability mutual information. We argue that these are useful indicators, with lower values indicating a risk of poor parameter inference and underpowered model checks, respectively. Our main result establishes that a sufficiently complex expansion of a base statistical model forces a trade-off between these two mutual information quantities -- at least one of the two must decrease relative to the base model. We illustrate our result in three worked examples and extract implications for model expansion in practice. In particular, we show as an implication of our result that the negative impacts of model expansion can be limited by offsetting complexity in the likelihood with sufficiently constraining prior distributions.
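The abstract does not define the identifiability mutual information, but its intuition (a posterior close to the prior signals weak identifiability) can be sketched in a toy conjugate model. As a hedged illustration only, and not the paper's construction: for θ ~ N(0, τ²) and y | θ ~ N(θ, σ²), the mutual information I(θ; y) has the standard Gaussian-channel closed form ½ log(1 + τ²/σ²), which shrinks as the prior tightens (small τ), matching the idea that a constraining prior leaves less for the data to inform.

```python
import math

def gaussian_identifiability_mi(tau: float, sigma: float) -> float:
    """Closed-form I(theta; y) in nats for theta ~ N(0, tau^2),
    y | theta ~ N(theta, sigma^2): a standard Gaussian-channel result,
    used here only as a stand-in for the paper's identifiability MI."""
    return 0.5 * math.log(1.0 + tau**2 / sigma**2)

# A diffuse prior lets the data move the posterior far from the prior
# (high MI); a tight prior pins the parameter a priori (MI near zero).
wide = gaussian_identifiability_mi(tau=2.0, sigma=1.0)
tight = gaussian_identifiability_mi(tau=0.5, sigma=1.0)
print(f"wide prior:  {wide:.4f} nats")
print(f"tight prior: {tight:.4f} nats")
```

In this toy setting the trade-off direction is visible: constraining the prior lowers the identifiability MI, which is the mechanism the abstract invokes when suggesting that strong priors can offset likelihood complexity.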