Computing and Mathematical Sciences Colloquium

Monday, May 11, 2015, 4:00 PM

A Probabilistic Theory of Deep Learning

Speaker: Professor Richard G. Baraniuk, Electrical and Computer Engineering, Rice University; Founder and Director, OpenStax
Location: Annenberg 105
A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks complicated by nuisance variation. For instance, visual object recognition must contend with unknown object position, orientation, and scale, while speech recognition must contend with unknown pronunciation, pitch, and speaking rate. Recently, a new breed of deep learning algorithms has emerged for such high-nuisance inference tasks, routinely yielding pattern recognition systems with near- or super-human capabilities. But a fundamental question remains: why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive. We answer this question by developing a new probabilistic framework for deep learning based on the Deep Rendering Model: a generative probabilistic model that explicitly captures latent nuisance variation. By relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks and random decision forests, providing insights into their successes and shortcomings, a principled route to their improvement, and new avenues for exploration.
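
For intuition, the following is a minimal illustrative sketch in Python/NumPy (our own toy construction, not the speaker's implementation) of a one-layer rendering-style generative model: an observed signal is a class template placed at a latent translation (the nuisance variable) plus noise, and inference that max-marginalizes over the translation reduces to template matching over all shifts (a convolution) followed by a max over shifts (max pooling). All names here (render, infer, templates) are hypothetical.

    # Toy rendering-style model: signal = class template at a latent shift + noise.
    # Max-marginalizing the nuisance shift during inference yields the
    # convolution and max-pooling computations found in convolutional networks.
    import numpy as np

    rng = np.random.default_rng(0)

    n, k = 32, 8                          # signal length, template length
    templates = rng.normal(size=(3, k))   # one template per class (3 classes)

    def render(c, t, noise=0.1):
        """Generate a signal from class c with latent translation t (the nuisance)."""
        x = np.zeros(n)
        x[t:t + k] = templates[c]
        return x + noise * rng.normal(size=n)

    def infer(x):
        """Classify x by max-marginalizing the latent translation.

        scores[c, t] = <x, template c shifted by t> is a cross-correlation
        (a convolution); the max over t is max pooling; the argmax over c
        is the classifier read-out.
        """
        scores = np.array([[x[t:t + k] @ templates[c]
                            for t in range(n - k + 1)]
                           for c in range(len(templates))])
        pooled = scores.max(axis=1)       # max over nuisance = max pooling
        return pooled.argmax(), pooled

    x = render(c=1, t=5)
    c_hat, _ = infer(x)
    print("true class: 1, inferred class:", c_hat)

The point of the toy example is that convolution and max pooling are not ad hoc design choices: under a generative model with a latent translation nuisance, they emerge as the natural inference computations, which is the flavor of result the Deep Rendering Model develops in full.
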
Series: Computing and Mathematical Sciences Colloquium Series

Contact: Carmen Nemer-Sirois at (626) 395-4561 or carmens@caltech.edu