Abstract
Stein (1956), in his seminal paper, made the surprising discovery that the sample mean is an inadmissible estimator of the population mean in three or more dimensions under squared error loss. The past five decades have witnessed multiple extensions and variations of Stein's result. The extension of Stein's results to prediction problems is of more recent origin, beginning with Komaki (2001), George et al. (2006), and Ghosh et al. (2006). The present article shows how the estimation and prediction problems go hand in hand under certain “intrinsic losses”, which include both the Kullback-Leibler and Bhattacharyya-Hellinger divergence losses. The estimators dominating the sample mean under such losses are motivated from both the Bayesian and empirical Bayes points of view.
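As an illustrative aside (not part of the original abstract), Stein's inadmissibility result can be checked numerically: the James-Stein estimator, the standard shrinkage estimator arising from this line of work, has smaller average squared error loss than the raw observation when the dimension is at least three. The sketch below is a minimal Monte Carlo comparison under an assumed setup of one observation `X ~ N(theta, I_d)` with `d = 10`; the choice of `theta` and the shrinkage toward the origin are illustrative assumptions, not a reconstruction of the article's estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 10, 20000          # dimension >= 3, number of Monte Carlo replications
theta = np.ones(d)             # assumed true mean (illustrative choice)

# One N(theta, I_d) draw per trial; X itself is the sample-mean/MLE estimator here.
X = rng.normal(theta, 1.0, size=(trials, d))

# James-Stein estimator: shrink X toward the origin by a factor (1 - (d-2)/||X||^2).
norms_sq = np.sum(X**2, axis=1)
js = (1.0 - (d - 2) / norms_sq)[:, None] * X

# Average squared error loss (empirical risk) of each estimator.
mle_risk = np.mean(np.sum((X - theta) ** 2, axis=1))
js_risk = np.mean(np.sum((js - theta) ** 2, axis=1))
print(f"MLE risk ~ {mle_risk:.2f}, James-Stein risk ~ {js_risk:.2f}")
```

In this setup the MLE's risk is exactly `d = 10`, while the James-Stein risk comes out strictly smaller, consistent with Stein's dominance result; the gap shrinks as `||theta||` grows but never reverses.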