
Course pages 2021–22

Probabilistic Machine Learning

Prerequisites and related courses.

  • The content in this course is advanced machine learning. If you are looking for an introduction to probabilistic machine learning, please see the IB Data Science course.
  • There are close links to L48: Machine Learning and the Physical World. The topics covered are complementary, and the philosophy is different. The L48 lectures on Gaussian processes and probabilistic inference are especially relevant.

Practical arrangements.

  • Auditing. If you wish to audit this course, all the lecture videos will be available online, as listed below, and there is no need to ask permission to use them. I regret that unregistered students are not permitted to attend in-person lectures this year.
  • Lectures and videos. For topic 1, lectures will be in-person in the Computer Lab, and videos and notes will also be posted below. For topics 2&3, details are to be confirmed.
  • Office hours will be in SS03, on Tuesdays starting 26 Oct, from 4–5pm. In exceptional cases, Thursdays 4–5pm will also be possible. You can also post questions to the Q&A forum on Moodle.

Topics covered

1. Probabilistic neural networks

Neural networks, from the perspective of probabilistic modelling. Topics covered: classifiers, generative models, recurrent networks, autoencoders, GANs.
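To make the probabilistic perspective concrete: a classifier's softmax output can be read as a categorical likelihood Pr(y | x), so training with cross-entropy loss is maximum-likelihood estimation. Here is a minimal sketch of that view, using numpy rather than the PyTorch used in the course, and with toy linearly-separable data and a single linear layer chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative, not from the course): 2-d inputs, 2 classes.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Linear classifier: logits = X W + b, likelihood Pr(y|x) = softmax(logits)[y]
W = np.zeros((2, 2))
b = np.zeros(2)

lr = 0.1
for step in range(200):
    p = softmax(X @ W + b)                         # predicted class probabilities
    nll = -np.log(p[np.arange(len(y)), y]).mean()  # cross-entropy = mean neg. log-lik.
    # Gradient of the mean NLL for softmax with linear logits
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1
    grad /= len(y)
    W -= lr * X.T @ grad
    b -= lr * grad.sum(axis=0)

acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

Minimising `nll` here is exactly maximising the log-likelihood of the labels under the categorical model, which is the framing the lectures develop.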

  • In-person lectures in the Computer Lab from 11 Oct to 25 Oct (timetable)
  • Prerecorded videos are listed below. Where the material is the same as last year, the link goes to last year's video; where the material is new this year, I'll post a new video. I will not record live videos of this year's in-person lectures.
Notes for sections 1 and 2
Notes for section 3
Lecture 1 [slides]
Prerequisites — review section 1 of IB Data Science
1.1 Prediction accuracy, 1.2 Probabilistic learning, 1.3 PyTorch — video (20:27) nn.ipynb
Lecture 2 [slides]
Lecture 1 continued
Generative models — IB Data Science section 1.6
Lecture 3 [slides]
1.4 Recurrent neural networks — video (9:09)
1.5 Underfitting and overfitting — video (7:41)
Lecture 4 [slides]
2.1 KL divergence — video to come
2.2 Importance sampling — video (11:17)
2.3 Bounds — video to come
Lecture 5 [slides]
3.1 Generative neural networks — video (10:40)
3.2 Autoencoder in maths — video (15:15)
3.3 Autoencoder in practice — video (28:37)
Reading: Auto-Encoding Variational Bayes (Kingma, Welling, 2014), Importance Weighted Autoencoders (Burda, Grosse, Salakhutdinov, 2015)
Lecture 6 [slides]
Latent variable models for datasets
Lecture 7 [slides]
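Of the estimation tools covered in lecture 4, importance sampling is the easiest to see in a few lines: to estimate an expectation under a target p using samples from a proposal q, reweight each sample by p(x)/q(x). A self-contained numpy sketch (the Gaussian target/proposal pair and the choice of test function are assumed toy examples, not taken from the lectures):

```python
import numpy as np

rng = np.random.default_rng(1)

# Target p = N(0,1); we want E_p[X^2] = 1, but we sample from q = N(0, 2^2).
def log_p(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_q(x):
    return -0.5 * (x / 2)**2 - np.log(2) - 0.5 * np.log(2 * np.pi)

n = 100_000
x = rng.normal(0, 2, size=n)       # draws from the proposal q
w = np.exp(log_p(x) - log_q(x))    # importance weights p(x)/q(x)
est = np.mean(w * x**2)            # unbiased estimate of E_p[X^2]
```

The estimator is unbiased for any q that covers p's support, but its variance depends on how well q matches p, which is where the KL-divergence and bounds material of the same lecture comes in.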

2. TrueSkill — Graphical models and Gibbs sampling

Bayesian modelling, applied to the problem of ranking players in a tournament.
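The Gibbs-sampling idea can be sketched on the smallest possible version of the ranking problem: two players with Gaussian skill priors, one observed win. The sampler alternates between the latent performance gap (a truncated Gaussian, given the win) and the skills (a Gaussian conditional, given the gap). This is a hedged numpy sketch under those simplifying assumptions, not the full TrueSkill model from the lectures:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two players, skills w ~ N(0, I). Player 1 beat player 2 once.
# Latent performance gap t ~ N(w1 - w2, 1), with the win meaning t > 0.
a = np.array([1.0, -1.0])
Sigma = np.linalg.inv(np.eye(2) + np.outer(a, a))  # posterior cov of w given t
L = np.linalg.cholesky(Sigma)

def sample_t_given_w(w):
    # Truncated normal on (0, inf), by rejection (fine when the mean is near 0).
    while True:
        t = rng.normal(w[0] - w[1], 1.0)
        if t > 0:
            return t

w = np.zeros(2)
samples = []
for it in range(6000):
    t = sample_t_given_w(w)          # step 1: gap given skills and the win
    mu = Sigma @ (a * t)             # step 2: skills given the gap
    w = mu + L @ rng.normal(size=2)
    if it >= 1000:                   # discard burn-in
        samples.append(w.copy())

samples = np.array(samples)
mean_gap = (samples[:, 0] - samples[:, 1]).mean()  # posterior skill gap, > 0
```

After the win, the posterior mean of player 1's skill is pulled above player 2's, which is the ranking signal the full model aggregates across many games.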

3. Models for document collections — LDA

Another application of Bayesian modelling.

  • Delivered by the Engineering department, as above
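LDA's generative story is short enough to write out directly: each topic is a distribution over the vocabulary, each document draws topic proportions from a Dirichlet, and each word draws a topic and then a word from that topic. A numpy sketch of the forward (generative) process only; the vocabulary size, topic count, and hyperparameter values are illustrative assumptions, and inference in this model is what the lectures address:

```python
import numpy as np

rng = np.random.default_rng(3)

K, V, D, N = 3, 20, 5, 50    # topics, vocab size, docs, words per doc (toy sizes)
alpha, beta = 0.5, 0.1       # Dirichlet hyperparameters (assumed values)

# Topic-word distributions: phi_k ~ Dirichlet(beta, ..., beta)
phi = rng.dirichlet(np.full(V, beta), size=K)

docs = []
for d in range(D):
    theta = rng.dirichlet(np.full(K, alpha))   # this document's topic proportions
    z = rng.choice(K, size=N, p=theta)         # a topic for each word position
    words = np.array([rng.choice(V, p=phi[k]) for k in z])
    docs.append(words)
```

Given only the word counts in `docs`, the inference problem is to recover plausible `phi` and `theta`, which connects back to the Gibbs-sampling machinery of topic 2.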

Assessment

There are five pieces of coursework. Three are structured exercises designed to reinforce the lectures. Two are for an open-ended investigation of a topic that you choose from a small list, drawing on the main themes of the lecture course. Submission instructions are on Moodle.

You may also be interested in a structured exercise on Gaussian processes that was used in an earlier version of this course.