ATML: Deep generative modelling
This track is an introduction to deep generative modelling algorithms, where we understand that term as follows:
- A generative model is a probability distribution, or generative process, that is derived from data so as to approximate the distribution that produced the data.
- A deep generative model is one that uses deep neural networks to represent the generative process or its components.
- A deep generative modelling algorithm consists of three ingredients: a choice of generative process, a family of distributions parametrised by neural networks to represent that process, and a learning algorithm to fit those networks' parameters to data (a minimal sketch follows this list).
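To make the three ingredients concrete, here is a minimal sketch of one such algorithm on two-dimensional toy data. It is illustrative only and not part of the course material: the choice of PyTorch, the network sizes, and the toy dataset are all assumptions. The generative process samples x1 and then x2 given x1; the family of distributions consists of Gaussians whose parameters come from learned scalars and a small network; and the learning algorithm is maximum likelihood by gradient descent.

```python
# Minimal sketch of a deep generative modelling algorithm (illustrative, not course code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# (1) Generative process: x = (x1, x2) is sampled autoregressively,
#     x1 ~ N(mu1, sigma1^2), then x2 | x1 ~ N(mu2(x1), sigma2(x1)^2).
# (2) Family of distributions: the conditional's parameters are produced
#     by a small neural network; the marginal's are free parameters.
class TinyAutoregressiveModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.params1 = nn.Parameter(torch.zeros(2))  # mu1, log sigma1
        self.net = nn.Sequential(                    # x1 -> (mu2, log sigma2)
            nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2)
        )

    def log_prob(self, x):
        # Exact log-density: log p(x1) + log p(x2 | x1).
        mu1, log_s1 = self.params1
        lp1 = torch.distributions.Normal(mu1, log_s1.exp()).log_prob(x[:, 0])
        out = self.net(x[:, :1])
        lp2 = torch.distributions.Normal(out[:, 0], out[:, 1].exp()).log_prob(x[:, 1])
        return lp1 + lp2

    @torch.no_grad()
    def sample(self, n):
        # Run the generative process forward: x1 first, then x2 given x1.
        mu1, log_s1 = self.params1
        x1 = torch.distributions.Normal(mu1, log_s1.exp()).sample((n,))
        out = self.net(x1[:, None])
        x2 = torch.distributions.Normal(out[:, 0], out[:, 1].exp()).sample()
        return torch.stack([x1, x2], dim=1)

# Toy "data distribution": points near a parabola (a stand-in for real data).
x1 = torch.randn(1024)
data = torch.stack([x1, x1 ** 2 + 0.1 * torch.randn(1024)], dim=1)

# (3) Learning algorithm: maximum likelihood by gradient descent.
model = TinyAutoregressiveModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = -model.log_prob(data).mean()  # negative log-likelihood
    loss.backward()
    opt.step()

print(model.sample(5))  # draws from the fitted generative process
```

The models covered in the lectures, from variational autoencoders to diffusion models, can be read as richer instances of this same three-part recipe.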
This track will survey the main classes of deep generative modelling algorithms, particularly for high-dimensional data, and their applications.
Course information
Instructor: Nikolay Malkin; assistants: Kirill Tamogashev and Rajit Rajpal.
Lectures: Tuesdays 17:10-18:00, Anatomy Lecture Theatre.
Tutorials: Mondays 13:10-14:00, Mondays 14:10-15:00, and Wednesdays 13:10-14:00, Appleton Tower M2, starting in Week 3.
Recommended references: *Probabilistic Machine Learning: An Introduction* and *Probabilistic Machine Learning: Advanced Topics*, both by Kevin P. Murphy. Part IV of the latter book concerns deep generative models.
Course schedule
Slides will be posted 24 hours in advance of each lecture.
| Week | Date | Topic | Slides |
|---|---|---|---|
| 1 | 13.01 | Introduction and overview | |
| 2 | 20.01 | Preliminaries: distribution approximation, latent variable models, probabilistic (Bayesian) inference | |
| 3 | 27.01 | Latent variable models as generative models: autoencoders, variational autoencoders, hierarchical models, and their evaluation | |
| 4 | 03.02 | Generative models with exact density evaluation: normalising flows, autoregressive models | |
| 5 | 10.02 | Adversarial objectives for generative models: generative adversarial networks, density-free evaluation of generative models | |
| Break | |||
| 6 | 24.02 | Downstream uses of generative models: representation learning, conditioning and control, improvement with human feedback | |
| 7 | 03.03 | Guest lecture by Zee Talat, “Evaluating generative AI” | |
| 8 | 10.03 | Diffusion and continuous-time models I: diffusion models as hierarchical VAEs, denoising objective | |
| 9 | 17.03 | Diffusion and continuous-time models II: score matching, stochastic differential equations | |
| 10 | 24.03 | Diffusion and continuous-time models III: flow matching, advanced topics | |
| 11 | 31.03 | Revision and advanced topics depending on interest: implicit (energy-based) models, discrete latent variable models and neurosymbolic methods, various applications, etc. | |
License
All rights reserved, The University of Edinburgh.