Model uncertainty and validation
William M. Briggs
2011-01-04
10:30:00 - 12:00:00
R440, Astronomy and Mathematics Building
Models that pass quality checks under traditional measures of goodness of fit may look far less impressive when examined predictively. Ordinary model diagnostics and quantifications of uncertainty miss the aspects of performance that matter most to the model user. We first define what it means to say a model makes a prediction, and then emphasize the importance of predictive inference. Calibration will be explained, and Bayesian p-values will be contrasted with their competitors, predictive scores. Since what makes a model good or bad is how useful it is, skill scores become necessary. Examples involving commonly used models will be given.
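As a rough illustration of the kind of predictive scoring and skill scoring the abstract mentions (not taken from the talk itself), the sketch below computes a Brier score for probability forecasts of a binary event and a skill score against a naive climatological reference. All names and numbers are invented for the example; the talk may use different scores entirely.

```python
# Illustrative sketch only: one common predictive score (the Brier score)
# and a skill score built from it. The Brier score is the mean squared
# error of forecast probabilities against 0/1 outcomes; the skill score
# measures fractional improvement over a naive reference forecast, so
# 1 is perfect, 0 matches the reference, and negative values are worse.

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

def skill_score(model_score, reference_score):
    """Fractional improvement of the model over the reference forecast."""
    return 1.0 - model_score / reference_score

# Hypothetical data: observed events and a model's forecast probabilities.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
model_probs = [0.9, 0.2, 0.7, 0.8, 0.3, 0.1, 0.6, 0.4]

# Naive reference: always forecast the observed base rate ("climatology").
base_rate = sum(outcomes) / len(outcomes)
reference_probs = [base_rate] * len(outcomes)

bs_model = brier_score(model_probs, outcomes)
bs_ref = brier_score(reference_probs, outcomes)
print(bs_model, bs_ref, skill_score(bs_model, bs_ref))
```

The point of the skill score is the one made in the abstract: a model is judged useful only relative to something, here a forecast anyone could make without the model.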