
Department of Computer Science and Technology

Masters

Course pages 2022–23

Machine Learning for Language Processing

Principal lecturer: Dr Andreas Vlachos
Taken by: MPhil ACS, Part III
Code: L101
Term: Michaelmas
Hours: 16 (8 lectures + 8 seminar sessions)
Class limit: max. 16 students
Prerequisites: L90 Overview of Natural Language Processing (or similar) AND L95 Introduction to Natural Language Syntax and Parsing. These two modules may be taken concurrently with this module to meet the prerequisites.
Moodle, timetable

Aims

This module aims to provide an introduction to machine learning with specific application to tasks such as document classification, spam email filtering, language modelling, part-of-speech tagging, and named entity and event recognition for textual information extraction. We will cover supervised, weakly-supervised and unsupervised approaches using generative and discriminative linear and non-linear classifiers, such as Naive Bayes, the Perceptron, Logistic Regression and the Multi-Layer Perceptron, as well as clustering and dimensionality-reduction methods such as latent Dirichlet allocation and neural word embeddings.
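As an informal illustration only (not part of the module materials), the sketch below shows the kind of generative bag-of-words document classifier introduced early in the module: a multinomial Naive Bayes topic classifier built with scikit-learn. The toy documents, labels and test sentence are invented for demonstration.

    # Minimal sketch: bag-of-words features + multinomial Naive Bayes,
    # a generative document classifier of the kind covered in the module.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Invented toy training data for a two-class topic classification task.
    docs = [
        "stock markets fell sharply today",
        "the team won the championship match",
        "shares rallied after the earnings report",
        "the striker scored twice in the final",
    ]
    labels = ["finance", "sport", "finance", "sport"]

    # Count-based bag-of-words features feed a multinomial Naive Bayes model.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(docs, labels)

    # Classify an unseen toy document; expected output: ['sport']
    print(model.predict(["the midfielder missed the match"]))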

Syllabus

Classification by machine learning: classification, types of classifier, generative vs. discriminative models, (un-/semi-)supervised training.

Document classification: by topic, sentiment, spam content, etc.; bag-of-words, word embeddings, feature selection / induction.

Structured prediction: sequence tagging, graph parsing, incremental language generation with recurrent neural networks.

Objectives

On completion of this module, students should:

  • understand the issues involved in applying machine learning approaches to a range of language processing applications;
  • understand the theory underlying a number of machine learning approaches that have been applied to language processing, including: Naive Bayes, Perceptron, Logistic Regression, and Multi-Layer Perceptron;
  • understand some applications and specific tasks including: document topic classification and clustering, spam filtering, PoS tagging, named entity recognition, event extraction, language modelling and word embeddings.

Coursework

Students will be expected to undertake assigned reading for lectures and seminars. Each student will give a 20-minute presentation of one paper.

Assessment

  • Students will receive one tick worth 5% for attendance at seminar sessions, reading of assigned material, and satisfactory contribution during seminars.
  • Students will receive a second tick worth 5% for a satisfactory presentation of an assigned paper.
  • Students will undertake a small project to be agreed with the lecturers and write a project report of not more than 5000 words. The report will be due around the beginning of the Lent Term (see the academic calendar for the precise date), will be assessed by the lecturers, and will account for 90% of the module marks.

Recommended reading

Bishop, C. (2006). Pattern recognition and machine learning. Springer. (Chaps: 1, 2, 4-9, 13).

Jurafsky, D. & Martin, J. (2008). Speech and language processing. Prentice Hall (2nd ed.). (Chaps: 4-6, 22). (See also the 3rd ed. draft, available online.)

Manning, C., Raghavan, P. & Schütze, H. (2008). Introduction to information retrieval. Cambridge University Press. (Chaps: 12-18).

Goodfellow, I., Bengio, Y. & Courville, A. (2016). Deep learning. MIT Press. (Chaps: 6-12).

Further Information

Due to infectious respiratory diseases, the method of teaching for this module may be adjusted to cater for physical distancing and students who are working remotely. Unless otherwise advised, this module will be taught in person.