Explainable Artificial Intelligence
Principal lecturer: Prof Mateja Jamnik
Additional lecturers: Mateo Espinosa Zarlenga, Dr Zohreh Shams
Taken by: MPhil ACS, Part III
Code: L193
Term: Lent
Hours: 16 (6 hrs lectures; 6 hrs presentations; 4 hrs practicals)
Format: In-person lectures
Class limit: max. 20 students
Prerequisites: A solid background in statistics, calculus and linear algebra. We strongly recommend some experience with machine learning and deep neural networks (to the level of the first chapters of Goodfellow et al.’s “Deep Learning”). Students are expected to be comfortable reading and writing Python code for the module’s practical sessions.
Moodle, timetable
Aims
The recent, highly visible introduction of Artificial Intelligence (AI) models into everyday consumer-facing products, services, and tools brings with it several new technical challenges and ethical considerations. Amongst these is the fact that most of these models are driven by Deep Neural Networks (DNNs): models that, although extremely expressive and useful, are notoriously complex and opaque. This “black-box” nature of DNNs limits their successful deployment in critical scenarios such as those in healthcare and law. Explainable Artificial Intelligence (XAI) is a fast-moving subfield of AI that aims to overcome this crucial limitation of DNNs by either
(i) constructing human-understandable explanations for their predictions,
or (ii) designing novel neural architectures that are interpretable by construction.
In this module, we will introduce the key ideas behind XAI methods and discuss some of their important application areas (e.g., healthcare, scientific discovery, model debugging, model auditing). We will approach this by focusing on what constitutes an explanation, and by discussing different ways in which explanations can be constructed for a model or learnt so that the model generates them as a by-product. The main aim of this module is to introduce students to several commonly used approaches in XAI, both theoretically in lectures and through hands-on exercises in practicals, while also bringing recent promising directions within this field to their attention. We hope that, by the end of this module, students will be able to contribute directly to XAI research and will understand how the methods discussed here may be powerful tools for their own work and research.
Syllabus
- Overview and taxonomy of XAI (why explainability is needed, definition of terms, taxonomy of the XAI space, etc.)
- Perturbation-based feature attribution methods (e.g., LIME, Anchors, SHAP)
- Propagation-based feature attribution methods (e.g., Layer-wise Relevance Propagation, saliency methods)
- Concept-based explainability (e.g., Net2Vec, TCAV, ACE)
- Interpretable architectures (e.g., CBMs and variants, Concept Whitening, SENNs)
- Neurosymbolic methods (e.g., DeepProbLog, neural reasoners)
- Sample-based explanations (e.g., influence functions, ProtoPNets)
- Counterfactual explanations
Proposed Schedule
The 16 contact hours across 8 weeks will be divided as follows:
Week 1: 1h lecture + 1h lecture
Week 2: 1h lecture + 1h reading group & presentations
Week 3: 1h lecture + 1h reading group & presentations
Week 4: 2h practical
Week 5: 1h lecture + 1h reading group & presentations
Week 6: 2h practical
Week 7: 1h lecture + 1h reading group & presentations
Week 8: 1h reading group & presentations + 1h reading group & presentations
In weeks where lectures are planned, one-hour lecture slots will alternate with one hour of student paper presentations. During each paper presentation session, three students will each present a paper related to the topic covered in that week's earlier lecture, for about 10 minutes plus 5 minutes of questions. At the end of all paper presentations, there will be a discussion of all the papers presented. The order of student presentations will be allocated randomly during the first week, so that students know in advance when they are expected to present. For the sake of fairness, we will release the paper to be presented by each student one week before their presentation slot.
Objectives
By the end of this module, students should be able to:
- Recognise and identify key concepts in XAI, together with their connections to related subfields of AI such as fairness, accountability, and trustworthy AI.
- Understand how to use, design, and deploy model-agnostic perturbation methods such as LIME, Anchors, and RISE. In particular, students should understand the connection between feature importance and cooperative game theory, and its use in methods such as SHAP (a brief illustrative sketch follows this list).
- Identify the uses and limitations of propagation-based feature importance methods such as Saliency, SmoothGrad, GradCAM, and Integrated Gradients. Students should be able to implement each of these methods on their own and connect the theoretical ideas behind them to practical code, exploiting modern frameworks’ auto-differentiation (see the saliency sketch after this list).
- Understand what concept learning is and which limitations of traditional feature-based methods it overcomes. Specifically, students should understand how probing a DNN’s latent space can be exploited to learn concepts that are useful for explainability.
- Reason about the key components of inherently interpretable architectures and neuro-symbolic methods and understand how interpretable neural networks can be designed from first principles.
- Elaborate on what sample-based explanations are and how influence functions and prototypical architectures such as ProtoPNet can be used to construct such explanations.
- Explain what counterfactual explanations are, how they are related to causality, and under which conditions they may be useful.
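As a companion to the two attribution objectives above, the sketches below suggest how these ideas translate into code. Both are minimal, illustrative examples written in Python (assuming NumPy and PyTorch are available, as in the practicals' Colab notebooks); the models, inputs, and function names are hypothetical placeholders rather than course material.

The first sketch estimates Shapley values by Monte Carlo sampling over feature orderings, the cooperative-game-theoretic idea underlying SHAP (SHAP itself uses more refined estimators):

```python
# A minimal sketch of Monte Carlo Shapley value estimation for feature
# attribution. The model `f` and the baseline are hypothetical stand-ins.
import numpy as np

def shapley_sampling(f, x, baseline, n_samples=500, rng=None):
    """Estimate each feature's Shapley value for the prediction f(x).

    Features "absent" from a coalition are replaced by the baseline value,
    which is one common (but not the only) way to define the coalition game.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)       # random feature ordering
        z = baseline.copy()
        prev = f(z)
        for j in perm:                  # add features one at a time
            z[j] = x[j]
            curr = f(z)
            phi[j] += curr - prev       # marginal contribution of feature j
            prev = curr
    return phi / n_samples

if __name__ == "__main__":
    # Toy linear "model": Shapley values should recover w * (x - baseline).
    w = np.array([1.0, -2.0, 0.5])
    f = lambda z: float(w @ z)
    x = np.array([1.0, 1.0, 1.0])
    baseline = np.zeros(3)
    print(shapley_sampling(f, x, baseline, n_samples=200, rng=0))  # ~ [1.0, -2.0, 0.5]
```

The second sketch computes a vanilla gradient saliency map, showing how a modern framework's auto-differentiation does most of the work for propagation-based attribution:

```python
# A minimal sketch of a vanilla gradient saliency map using PyTorch's
# auto-differentiation. The model and input are toy placeholders; in the
# practicals a trained image classifier would be used instead.
import torch
import torch.nn as nn

def gradient_saliency(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return |d(score of target_class)/d(input)| as a per-feature saliency map."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)   # track gradients w.r.t. the input
    scores = model(x)                              # forward pass: (1, num_classes) logits
    scores[0, target_class].backward()             # backprop the chosen class score
    return x.grad.detach().abs()                   # saliency = magnitude of input gradient

if __name__ == "__main__":
    # Toy setup: a small untrained CNN and a random "image", purely for illustration.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    image = torch.rand(1, 3, 32, 32)
    saliency = gradient_saliency(model, image, target_class=3)
    print(saliency.shape)  # torch.Size([1, 3, 32, 32]): one score per pixel and channel
```

Variants such as SmoothGrad or Integrated Gradients repeat essentially the same backward pass over noisy or interpolated copies of the input and average the resulting gradients.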
Upon completion of this module, students will have the technical background and tools to use XAI as part of their own research, or to partake in XAI research itself. Moreover, we hope that, by detailing a clear timeline of how this relatively young subfield has developed, students will be better able to understand some of the fundamental open questions in this area and some of the promising directions currently being actively explored.
Assessment
(10%) student presentation: each student will be randomly assigned a presentation slot at the beginning of the course. We will then distribute a paper for them to present in their slot a week before the presentation’s scheduled time. Students are expected to prepare a 10-minute presentation of their assigned paper in which they present the motivation of the work and discuss its main methodology and findings. We will encourage students to focus on clearly communicating the intuition behind their assigned paper and to connect it with ideas discussed in previous lectures. Each week, all students who are not presenting will be asked to submit, before the presentation session, one question about each paper being presented that week.
(20%) practical exercises: we will run two practical sessions in which students will be asked to complete a series of exercises that require them to use concepts introduced in lectures up to that point. For each practical session, we will prepare a Colab notebook to guide students through the exercises, and we will ask students to submit their solutions through this notebook. We expect students to complete about two-thirds of the exercises during the practical session and to finish the rest as homework. Each practical will be worth 10%.
(70%) mini-project: at the end of week 3, we will hand out a list of papers for students to select their mini-projects from. Each mini-project consists of selecting a paper from our list and reimplementing and expanding its key idea. We encourage students to be as creative as they like in how they drive their mini-project once the paper has been selected. For example, they can reimplement the technique in the paper and combine it with methodologies from other works discussed in lectures, or they can apply the paper’s methodology to a new domain, dataset, or setup where the technique may offer interesting and potentially novel insights. We will ask all students to submit a report in workshop format of up to 4,000 words. This report, due roughly a week after Lent term ends, should describe their methodology, experiments, and results. A crucial aspect of this report is explaining the rationale behind the methodological and experimental choices made, and elaborating on the hypotheses tested throughout the mini-project, ideally demonstrating a deep understanding of the work it builds on. To help students select their projects and make progress on them, we will hold regular office hours where students can discuss their progress and questions with us.
Recommended reading
Textbooks
* Christoph Molnar, “Interpretable Machine Learning” (2022): https://christophm.github.io/interpretable-ml-book/
Online Courses and Tutorials
* Su-In Lee and Ian Covert, “CSEP 590B: Explainable AI”, University of Washington
* Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities
* On Explainable AI: From Theory to Motivation, Industrial Applications, XAI Coding & Engineering Practices - AAAI 2022 Tutorial
Survey papers
* Arrieta, Alejandro Barredo, et al. "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI." Information Fusion 58 (2020): 82-115.
* Rudin, Cynthia, et al. "Interpretable machine learning: Fundamental principles and 10 grand challenges." Statistics Surveys 16 (2022): 1-85.