Estimation, Prediction and the Stein Phenomenon under Divergence Loss

Series
Stochastics Seminar
Time
Thursday, November 12, 2009 - 3:00pm for 1 hour (actually 50 minutes)
Location
Skiles 269
Speaker
Gauri Datta – University of Georgia
Organizer
Liang Peng
We consider two problems: (1) estimate a normal mean under a general divergence loss introduced in Amari (1982) and Cressie and Read (1984), and (2) find, under the same loss, a predictive density of a new observation drawn independently of the sampled observations from a normal distribution with the same mean but possibly a different variance. The general divergence loss includes as special cases both the Kullback-Leibler and Bhattacharyya-Hellinger losses. The sample mean, which is a Bayes estimator of the population mean under this loss and the improper uniform prior, is shown to be minimax in any dimension. A counterpart of this result for predictive densities is also proved in any dimension. The admissibility of these rules holds in one dimension, and we conjecture that the result is true in two dimensions as well.

However, the general Baranchik (1970) class of estimators, which includes the James-Stein estimator and the Strawderman (1971) class of estimators, dominates the sample mean in three or higher dimensions for the estimation problem. An analogous class of predictive densities is defined, and any member of this class is shown to dominate the predictive density corresponding to the uniform prior in three or higher dimensions. For the prediction problem, in the special case of Kullback-Leibler loss, our results complement to a certain extent some of the recent important work of Komaki (2001) and George, Liang and Xu (2006). While our proposed approach produces a general class of predictive densities (not necessarily Bayes) dominating the predictive density under a uniform prior, George et al. (2006) produced a class of Bayes predictors achieving a similar dominance. We also show that various modifications of the James-Stein estimator continue to dominate the sample mean, and by the duality between the estimation and predictive density results that we will present, similar results continue to hold for the prediction problem as well.

This is joint work with Professor Malay Ghosh and Dr. Victor Mergel.
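As a rough illustration of the Stein phenomenon described in the abstract, the following Monte Carlo sketch (not part of the talk) compares the sample mean with the James-Stein estimator for a normal mean in five dimensions. It assumes a power-divergence loss of the form L_beta(theta, a) = [1 - exp(-beta(1-beta)||a - theta||^2 / 2)] / [beta(1-beta)], which for unit-variance normal densities reduces to Kullback-Leibler as beta tends to 0 or 1 and to the Bhattacharyya-Hellinger loss at beta = 1/2; the exact loss and setup used in the talk may differ.

```python
# Illustrative sketch only: Monte Carlo comparison of the sample mean and the
# James-Stein estimator under an assumed power-divergence loss for a normal
# mean with identity covariance.  The loss formula below is an assumption for
# illustration, not necessarily the one analyzed in the talk.
import numpy as np

rng = np.random.default_rng(0)

p = 5                      # dimension; the Stein phenomenon requires p >= 3
beta = 0.5                 # beta = 1/2: Bhattacharyya-Hellinger special case
theta = np.full(p, 2.0)    # hypothetical true mean vector
n_rep = 100_000            # number of Monte Carlo replications

def divergence_loss(estimate, truth, beta):
    """Assumed power-divergence loss between N(truth, I) and N(estimate, I)."""
    sq = np.sum((estimate - truth) ** 2, axis=-1)
    return (1.0 - np.exp(-beta * (1.0 - beta) * sq / 2.0)) / (beta * (1.0 - beta))

# One observation X ~ N_p(theta, I); the "sample mean" estimator is X itself.
X = theta + rng.standard_normal((n_rep, p))

# James-Stein estimator shrinking X toward the origin.
norm_sq = np.sum(X ** 2, axis=1, keepdims=True)
js = (1.0 - (p - 2) / norm_sq) * X

risk_mean = divergence_loss(X, theta, beta).mean()
risk_js = divergence_loss(js, theta, beta).mean()

print(f"estimated risk, sample mean : {risk_mean:.4f}")
print(f"estimated risk, James-Stein : {risk_js:.4f}")
# For p >= 3 the James-Stein risk estimate comes out smaller, illustrating the
# dominance over the sample mean discussed in the abstract.
```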