CMI Seminar: Oscar Leong
Deep generative models, such as Generative Adversarial Networks (GANs), have quickly become the state-of-the-art for natural image generation, leading to new approaches for enforcing structural priors in a variety of inverse problems. In contrast to traditional approaches enforcing sparsity, GANs provide a low-dimensional parameterization of the natural signal manifold, allowing signal recovery to be posed as a direct optimization problem over this low-dimensional space.

In this talk, we will discuss some recent results on enforcing GAN priors in two inverse problems that permeate the imaging sciences: compressive sensing and phase retrieval. Rigorous recovery guarantees for both problems are achieved when the number of measurements is on the order of the dimension of the GAN's latent space. This matches traditional theory for compressive sensing under a sparsity prior and overcomes a notorious theoretical bottleneck in phase retrieval, where the best-known efficient algorithms under sparsity assumptions exhibit a sub-optimal quadratic sample complexity.

One issue with GAN priors, however, is that they are prone to representation error due to architectural choices, biases in the training set, or mode collapse. We will also discuss a recent approach to overcoming dataset bias and representation error through the use of Invertible Neural Networks as natural image priors, which have no representation error by architectural design. We show that such invertible models can yield higher accuracy than GAN priors for both in- and out-of-distribution images in linear inverse problems, and we establish theoretical bounds on expected recovery error in the case of a linear invertible model.
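To make the recovery formulation concrete, the sketch below poses compressive sensing with a generative prior as the latent-space least-squares problem min_z ||A G(z) - y||^2. This is a toy illustration, not the speaker's implementation: the "generator" G here is a hypothetical random linear map standing in for a trained GAN decoder, and all dimensions are made-up toy sizes chosen so that the number of measurements m is on the order of the latent dimension k, as in the guarantees discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: latent dim k << signal dim n, measurements m = O(k).
k, n, m = 5, 100, 20

# Toy *linear* "generator" G: R^k -> R^n, a stand-in for a trained GAN decoder.
G = rng.standard_normal((n, k))

# Ground-truth signal lying exactly in the generator's range (no representation error).
z_true = rng.standard_normal(k)
x_true = G @ z_true

# Gaussian measurement matrix A and noiseless measurements y = A x.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Recover by gradient descent on f(z) = ||A G z - y||^2 over the latent space.
AG = A @ G
z = np.zeros(k)
step = 0.5 / np.linalg.norm(AG, 2) ** 2  # safe step size for this quadratic objective
for _ in range(2000):
    z -= step * 2 * AG.T @ (AG @ z - y)

x_hat = G @ z
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.2e}")
```

With a nonlinear trained generator the same objective is optimized the same way (gradient steps in z, e.g. via automatic differentiation), but the problem becomes nonconvex; the linear case above is where the optimization is fully transparent. The phase retrieval variant replaces the residual with || |A G(z)| - |y| ||^2 over the same latent space.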