2017 Hogg and Craig Lecturer is Dr. Xiao-Li Meng

Dean of the Graduate School of Arts and Sciences, Whipple V. N. Jones Professor of Statistics, Harvard University
Date: Wednesday, March 29, 2017 to Thursday, March 30, 2017

Dr. Xiao-Li Meng from Harvard University will be our 45th Hogg and Craig Lecturer.  

Early in the 1969-70 academic year, Professor Allen T. Craig announced his retirement. He gave a retirement talk in January 1970. Under the leadership of Craig’s student and co-author, Professor Robert V. Hogg, the department decided to establish a lecture series to honor Professor Craig. His January 1970 talk was the first in this series. When Professor Hogg passed away at the age of 90 in 2014, the department decided to incorporate his name into the lecture series.

https://stat.uiowa.edu/hogg-and-craig-lectures 

Tentative schedule:

Wednesday, March 29

1:30 p.m. Lecture #1 in W307 Pappajohn Business Building (PBB)

From Euler to Clinton: An Unexpected Statistical Journey (Or: Size Does Matter, But You Might Be in for a Surprise…)

The phrase “Big Data” has greatly raised expectations of what we can learn about ourselves and the world in which we live or will live. It also appears to have boosted general trust in empirical findings, because it seems to be common sense that the more data we have, the more reliable our results are. Unfortunately, this common-sense notion can be falsified mathematically, even for methods as time-honored as ordinary least squares regression (Meng and Xie, 2014). Furthermore, whereas the size of a data set is a common indicator of the amount of information it carries, what matters far more is the quality of the data. A largely overlooked statistical identity, a potential candidate for the statistical counterpart to the beautiful Euler identity, reveals that trading quantity for quality in statistical estimation is a mathematically demonstrably doomed game (Meng, 2017). Without taking data quality into account, Big Data can do more harm than good: it drastically inflates our precision assessments, and hence produces gross overconfidence, which at a minimum sets us up for serious surprises when reality unfolds, as illustrated by the 2016 U.S. election.

References:

Meng, X.-L. and Xie, X. (2014). I Got More Data, My Model Is More Refined, But My Estimator Is Getting Worse! Am I Just Dumb? Econometric Reviews 33: 218-250. (Preprint available at http://www.stat.harvard.edu/Faculty_Content/Meng-cv.pdf)

Meng, X.-L. (2017). Statistical Paradises and Paradoxes in Big Data (I): The Bigger the Data, the Surer We Miss Our Target? In preparation.
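For readers curious what such an identity can look like in action, here is a minimal Python sketch. All population sizes, sampling rates, and bias mechanisms below are invented for illustration; the error decomposition used is one common statement of the quality-quantity-difficulty factorization discussed in Meng (2017), where rho is the correlation between the recording indicator R and the outcome Y:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical finite population (all numbers here are illustrative):
# the target is the population mean of Y.
N = 1_000_000
Y = rng.normal(size=N)
true_mean = Y.mean()

# "Big Data" recording: roughly half the population is recorded, but with a
# self-selection bias -- units with larger Y are more likely to be recorded.
p = 0.5 + 0.3 * np.tanh(Y)       # inclusion probability depends on Y
R = rng.random(N) < p            # recording indicator
big_biased_est = Y[R].mean()

# Honest alternative: a simple random sample of only 400 units.
small_srs_est = rng.choice(Y, size=400, replace=False).mean()

# The identity: estimation error = data quality * data quantity * difficulty,
#   Ybar_n - Ybar_N = rho_{R,Y} * sqrt((1 - f) / f) * sigma_Y,   f = n / N.
n = R.sum()
f = n / N
rho = np.corrcoef(R.astype(float), Y)[0, 1]
predicted_error = rho * np.sqrt((1 - f) / f) * Y.std()
actual_error = big_biased_est - true_mean   # the identity says these match

print(f"biased big-sample error: {actual_error:+.4f} (n = {n:,})")
print(f"identity's prediction:   {predicted_error:+.4f}")
print(f"SRS(400) error:          {small_srs_est - true_mean:+.4f}")
```

Even with half a million recorded units, the self-selected sample's error dwarfs that of the 400-unit simple random sample, and the identity reproduces that error exactly from the tiny R-Y correlation.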

2:45 p.m. Hogg & Craig Cake in 302 Schaeffer Hall (Student Awards presented at 3:00 p.m.)


Thursday, March 30

2:45 p.m. Reception in 241 Schaeffer Hall

3:30 p.m. Lecture #2 in LR2 Van Allen Hall

Bayesian, Fiducial, and Frequentist (BFF): Best Friends Forever?

Among the paradigms of statistical inference, the Bayesian and Frequentist are the most popular, with the Fiducial approach being the most controversial. However, there is essentially only one scientifically acceptable way of evaluating any inference method: show me how it performs across replications. And hence the great debate in statistics: which replications best help us predict real-world uncertainty? This unified mode of evaluation provides a prism that reveals the whole spectrum of foundations for probabilistic inference. In the familiar Data-Parameter space, the standard Frequentist's replications fix the parameter at its unknown “true” value and let the data replicate, whereas the Bayesian goes to the other extreme, fixing the data at their observed values and letting the parameter vary. The Frequentist thus pays the price of relevance: a method that works on average may not be relevant for the data at hand, just as a treatment that works for a good percentage of a population may not work for me. In contrast, the Bayesian pays the price of robustness: results are sensitive to prior assumptions about how the parameter varies. The Fiducial approach represents one of many possible compromises obtained by sliding a ruler along this relevance-robustness spectrum, but it suffers from an incoherent treatment of the data. Once we realize that these differences in inference amount to different choices of replications, and that no one size fits all, the Bayesian, Fiducial, and Frequentist schools can all thrive under one roof as BFFs (Best Friends Forever): only united can we combat the Big Data tsunami.

[A main part of this talk is based on Liu, K. and Meng, X.-L. (2016). "There is individualized treatment. Why not individualized inference?" Annual Review of Statistics and Its Application, Vol. 3, 79-111. Available at http://www.annualreviews.org/doi/full/10.1146/annurev-statistics-010814-020310]
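The two kinds of replications contrasted in the abstract can be made concrete in a few lines of code. The setup below (normal data with known variance, a flat prior, and all specific numbers) is an invented illustration, not taken from the talk; it exploits the fact that in this model the 95% posterior interval coincides numerically with the classical 95% confidence interval:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative model: Y_i ~ N(theta, 1), n = 25, flat prior on theta,
# so the posterior is N(ybar, 1/n) and both intervals are ybar +/- half_width.
n, sigma = 25, 1.0
half_width = 1.96 * sigma / np.sqrt(n)

# Frequentist replications: fix theta at its "true" value, replicate the
# data, and ask how often the interval covers theta.
theta_true = 3.0
ybars = rng.normal(theta_true, sigma / np.sqrt(n), size=100_000)
coverage = np.mean(np.abs(ybars - theta_true) <= half_width)

# Bayesian replications: fix the data at what was observed, let theta vary
# by drawing it from the posterior, and ask how often it lands in the interval.
ybar_obs = 2.71                  # a hypothetical observed sample mean
thetas = rng.normal(ybar_obs, sigma / np.sqrt(n), size=100_000)
post_prob = np.mean(np.abs(thetas - ybar_obs) <= half_width)

print(f"coverage over data replications:  {coverage:.3f}")
print(f"posterior prob. over theta draws: {post_prob:.3f}")
```

Both numbers come out near 0.95, but they answer different questions: the first averages over hypothetical data sets for a fixed parameter, the second averages over parameter values for the one data set in hand, which is exactly the choice of replications the talk examines.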