On creating a model assessment tool independent of data size and estimating the U statistic variance

Series
Stochastics Seminar
Time
Thursday, February 12, 2009 - 3:00pm for 1 hour (actually 50 minutes)
Location
Skiles 269
Speaker
Jiawei Liu – Department of Mathematics & Statistics, Georgia State University
Organizer
Heinrich Matzinger
If viewed realistically, models under consideration are always false. A consequence of model falseness is that for every data generating mechanism, there exists a sample size at which the model failure will become obvious. There are occasions when one will still want to use a false model, provided that it gives a parsimonious and powerful description of the generating mechanism. We introduce a model credibility index from the point of view that the model is false. The model credibility index is defined as the maximum sample size at which samples from the model and those from the true data generating mechanism are nearly indistinguishable. The index is estimated within a subsampling framework: a large data set is treated as the population, subsamples of various sizes are drawn from it, and these are compared with samples generated from the model. Exploring the asymptotic properties of the model credibility index leads to the problem of estimating the variance of U statistics. An unbiased estimator and a simple fix-up are proposed for estimating the U statistic variance.
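As background for the last point (the standard setup, not a description of the talk's estimator): for an i.i.d. sample $X_1, \dots, X_n$ and a symmetric kernel $h$ of order $m$, the U statistic and its exact variance are

$$U_n = \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} h(X_{i_1}, \ldots, X_{i_m}), \qquad \operatorname{Var}(U_n) = \binom{n}{m}^{-1} \sum_{c=1}^{m} \binom{m}{c} \binom{n-m}{m-c} \sigma_c^2,$$

where $\sigma_c^2 = \operatorname{Cov}\bigl(h(X_1, \dots, X_m),\, h(X_1, \dots, X_c, X_{m+1}, \dots, X_{2m-c})\bigr)$ is the covariance between kernel evaluations sharing $c$ arguments. Estimating $\operatorname{Var}(U_n)$ thus amounts to estimating the unknown covariances $\sigma_1^2, \dots, \sigma_m^2$.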