Zoubin Ghahramani, Center for Automated Learning and Discovery, CMU
Abstract
For the second view, I will show that for a variety of models it is possible to do efficient inference even with an infinite number of parameters. Once the infinitely many parameters are integrated out, the model essentially becomes "nonparametric". I will describe tractable learning in Gaussian processes (which can be thought of as infinite neural networks), infinite mixtures of Gaussians, infinite-state hidden Markov models, and infinite mixtures of experts. I will discuss the pros and cons of both views and how they can be reconciled.
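As a hedged illustration of the "integrate out infinitely many parameters" idea (not drawn from the talk itself), the sketch below shows exact Gaussian process regression in NumPy: the posterior over an infinite-dimensional function is computed using only finite kernel matrices. The kernel choice, noise model, and all names here are assumptions made for the example.

    import numpy as np

    def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
        """Squared-exponential covariance between two sets of 1-D inputs."""
        sqdist = (x1[:, None] - x2[None, :]) ** 2
        return variance * np.exp(-0.5 * sqdist / length_scale**2)

    def gp_posterior(x_train, y_train, x_test, noise=0.1):
        """Exact GP posterior mean and variance. The infinitely many
        'weights' of the implied infinite network are integrated out
        analytically, leaving only finite linear algebra."""
        K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
        K_s = rbf_kernel(x_train, x_test)
        K_ss = rbf_kernel(x_test, x_test)
        L = np.linalg.cholesky(K)                     # K = L L^T
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
        mean = K_s.T @ alpha                          # posterior mean
        v = np.linalg.solve(L, K_s)
        var = np.diag(K_ss) - np.sum(v**2, axis=0)    # posterior variance
        return mean, var

    # Toy usage: noisy observations of sin(x).
    rng = np.random.default_rng(0)
    x_train = rng.uniform(-3, 3, 20)
    y_train = np.sin(x_train) + 0.1 * rng.standard_normal(20)
    x_test = np.linspace(-3, 3, 50)
    mean, var = gp_posterior(x_train, y_train, x_test)

Note that the entire cost of inference is a single Cholesky factorization of the n-by-n training kernel matrix, which is what makes learning in these infinite models tractable.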
Joint work with Carl E. Rasmussen and Matthew J. Beal.