Congratulations to our PhD student, Shiao Liu, for successfully presenting his dissertation via Zoom!

Friday, April 14, 2023 - 6:00pm

Congratulations Shiao!

Error Analyses of Deep Generative Models


As powerful unsupervised deep learning methods for learning and sampling from complex data distributions, Generative Adversarial Networks (GANs) have achieved remarkable success in many machine learning tasks such as image synthesis, video prediction, and natural language generation. However, theoretical explanations for their empirical success are not well established.

This work studies the convergence rates of GANs and their variants under a collection of integral probability metrics defined through Lipschitz classes, including the Wasserstein distance. We show that GANs and their variants are able to adaptively learn data distributions with low-dimensional structure when the neural network architectures are specified properly. In particular, for distributions concentrated around a low-dimensional manifold, we show that the learning rates of GANs and their variants depend not on the high ambient dimension but on the lower intrinsic dimension of the manifold, which explains why GANs are able to mitigate the curse of dimensionality in the learning process. The analyses are based on new oracle inequalities decomposing the estimation error into approximation and statistical errors.
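For readers unfamiliar with the metrics mentioned above, the integral probability metric over a function class can be written as follows (this is the standard textbook definition, not notation taken from the dissertation itself; the Wasserstein-1 distance is the special case where the function class consists of 1-Lipschitz functions):

```latex
% Integral probability metric between distributions \mu and \nu
% over a function class \mathcal{F} (standard definition):
d_{\mathcal{F}}(\mu, \nu)
  = \sup_{f \in \mathcal{F}}
    \left| \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)] \right|

% Wasserstein-1 distance: take \mathcal{F} = \{ f : \mathrm{Lip}(f) \le 1 \}.
% The oracle-inequality decomposition referenced above takes the
% generic form (with \hat{\mu} the GAN estimate of the target \mu):
d_{\mathcal{F}}(\hat{\mu}, \mu)
  \le \underbrace{\text{approximation error}}_{\text{network expressiveness}}
    + \underbrace{\text{statistical error}}_{\text{finite-sample fluctuation}}
```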

Committee Chair: Jian Huang

Committee Members: Aixin Tan, Boxiang Wang, N.D. Shyamalkumar, Tianbao Yang (Computer Science)