Abstract
In this paper, we draw attention to the problem of model selection under conflicting criteria in general, and in annual reviews of seasonal adjustment in particular. Although partial concurrent seasonal adjustment and annual reviews are recommended by Eurostat, the problem of model selection in such reviews is seldom discussed in the literature, and our study is an attempt to fill this gap. In these reviews, revisions caused by model changes are highly undesirable. Selecting the best model requires a difficult trade-off among several diagnostics: M- and Q-statistics, the number of outliers, and revisions. In this study, a customary model selection procedure is described. Furthermore, we argue for treating the manually chosen models as the "true" models, which makes it possible to employ a supervised machine learning-like approach to select weights for these diagnostics. We show that this approach can perform as well as (if not better than) human statisticians, and thus enables an automated model selection procedure in such annual reviews. Although the approach has the limitations we describe, it is, to the best of our knowledge, the first study of its kind in the literature.
