Yun Yang - Colloquium Speaker
Variational inference is a widely used tool for approximating complicated probability densities, especially those arising as posterior distributions from complex hierarchical Bayesian models. In the first part of this talk, we focus on theoretical aspects of variational approximation as a computationally efficient procedure for approximate Bayesian inference. Operating in a frequentist setup, our theoretical development implies that point estimates constructed from variational procedures are consistent and converge at an optimal rate to the true parameter in a wide range of problems. On the negative side, however, variational procedures are well known to underestimate the variability of the target posterior distribution, leading to incorrect uncertainty quantification.
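The variance underestimation can be seen in a minimal numerical sketch (not part of the talk): for a correlated bivariate Gaussian "posterior", a standard closed-form result says the optimal fully factorized (mean-field) Gaussian approximation has component variances equal to the reciprocals of the diagonal precision entries, which never exceed the true marginal variances.

```python
import numpy as np

# Hypothetical target posterior: a correlated bivariate Gaussian.
Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])
Lambda = np.linalg.inv(Sigma)  # precision matrix

# Optimal mean-field Gaussian approximation q(x1)q(x2):
# each factor is N(mu_i, 1 / Lambda_ii).
mf_var = 1.0 / np.diag(Lambda)

print("true marginal variances:", np.diag(Sigma))
print("mean-field variances:   ", mf_var)
```

Here the true marginal variances are 1.0 while the mean-field variances shrink to 0.19; the stronger the correlation, the more severe the shrinkage.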
In the second part of the talk, we develop a new inferential framework for uncertainty quantification in the widely used mean-field variational approximation. Based on a careful non-asymptotic analysis, we show that the center of the mean-field approximation to the target posterior matches the maximum likelihood estimator (MLE) up to higher-order terms in a wide class of regular parametric models involving latent variables. Consequently, there is essentially no loss of efficiency in using this center as a point estimator instead of the MLE. We also propose a new class of variational weighted likelihood bootstrap (VWLB) methods for quantifying the uncertainty in the mean-field approximation. Alternatively, the proposed VWLB can be viewed as a new sampling scheme for producing independent samples that approximate the posterior. Compared with traditional sampling algorithms such as Markov chain Monte Carlo, VWLB can be implemented in parallel and requires no tuning.
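The abstract does not spell out the VWLB algorithm itself. As a hypothetical illustration of the generic weighted likelihood bootstrap idea it builds on (draw random weights, re-maximize the weighted likelihood, repeat independently), here is a sketch for a N(theta, 1) model, where the weighted MLE is simply the weighted mean of the data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)  # synthetic data
n = len(x)

def weighted_mle(x, w):
    # For a N(theta, 1) model, the maximizer of the weighted
    # log-likelihood sum_i w_i * log p(x_i | theta) is the
    # weighted mean of the observations.
    return np.sum(w * x) / np.sum(w)

# Weighted likelihood bootstrap: each replicate draws random
# Dirichlet weights and re-maximizes.  Replicates are mutually
# independent, so they parallelize trivially and need no tuning
# (no step sizes, no proposal scales, no burn-in).
draws = np.array([weighted_mle(x, rng.dirichlet(np.ones(n)))
                  for _ in range(1000)])

print(draws.mean(), draws.std())  # approximate posterior mean / sd
```

Unlike a Markov chain, the 1000 draws above are independent samples, so the loop could be split across workers with no communication between them.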