Principles of AI-driven Neuroscience and Explainability
Principal lecturers: Prof Pietro Lio', Michail Mamalakis, Dr Tiago Azevedo
Taken by: MPhil ACS, Part III
Code: L205
Term: Lent
Hours: 16 (8 × 2-hour lectures)
Format: In-person lectures
Prerequisites: Deep learning (highly important), machine learning principles (highly important), basics of computer vision (important), basics of graph neural networks (important), basics of explainable AI (not compulsory), basics of geometric deep learning (not compulsory). Coding knowledge: Python, including libraries such as PyTorch, NumPy, and pandas.
Aims
This module aims to provide students with a comprehensive
understanding of graph neural networks, selected geometric deep
learning models, convolutional neural networks (CNNs),
transformers, and fundamental concepts of brain anatomy,
connectomes, and medical imaging. A particular emphasis is placed
on explainability and interpretability in the medical domain,
specifically neuroscience.
By the end of the module, students will be familiar with
classification tasks involving practical examples of brain
pathologies, such as psychotic disorders, brain cancer, and
neurodegenerative disorders. They will also explore how AI can
potentially drive neuroscience forward by identifying novel
patterns and enhancing our understanding of brain function and
anatomy.
The module combines lectures and some hands-on applications to
ensure both theoretical depth and practical experience covering
foundational topics such as:
- Introduction to Basic Brain Anatomy, Connectomes and Medical Imaging
- Introduction to Basic Deep Learning and Classification Structures in Neuroscience
- Graph Neural Networks and Neuroscience
- Geometric Deep Learning Applied in Neuroscience
- Principles of Attributional Interpretability Methodologies
- Principles of Mechanistic Interpretability Approaches
- Applications of Explainable Artificial Intelligence in Neuroscience
- Clinical neuroscience perspective
Syllabus
Lecture 1: Introduction to Basic Brain Anatomy, Connectomes
and Medical Imaging
The aim of this lecture is to introduce basic brain anatomy,
focusing on different lobes, sulci regions, and the brain’s
folding patterns. Additionally, it will introduce the fundamental
characteristics of various medical imaging modalities, with an
emphasis on anatomical and functional MRI. The lecture will also
present methods for extracting connectome information [25] using
imaging techniques such as functional and structural MRI.
[1,2]
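As a simple illustration of the connectome-extraction step, the
sketch below builds a functional connectivity matrix from
parcellated fMRI time series; the array shapes and the random
stand-in data are assumptions for illustration only.

    # Illustrative sketch: a functional connectome as a correlation matrix.
    # Assumes `timeseries` holds parcellated fMRI signals of shape
    # (n_timepoints, n_regions); names and shapes are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    timeseries = rng.standard_normal((200, 90))   # stand-in for real fMRI data

    # Pearson correlation between every pair of regional time series
    connectome = np.corrcoef(timeseries.T)        # shape: (n_regions, n_regions)
    np.fill_diagonal(connectome, 0.0)             # remove self-connections

    # Keep only the strongest connections (a common thresholding step)
    threshold = np.percentile(np.abs(connectome), 90)
    adjacency = (np.abs(connectome) >= threshold).astype(float)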
Lecture 2: Introduction to Basic Deep Learning and
Classification Structures in Neuroscience
This lecture will briefly recap the basics of deep learning
architectures and layers, such as MLPs, transformers, attention
layers, and CNN blocks. It will also present state-of-the-art
classifier architectures like ViT and introduce classification
problems in neuroscience. [3,4]
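For orientation, here is a minimal PyTorch sketch of the kind of
CNN classifier recapped in this lecture, applied to
single-channel 2D brain slices; the architecture, input size,
and class count are illustrative assumptions, not a prescribed
model.

    import torch
    import torch.nn as nn

    class SliceClassifier(nn.Module):
        """Toy CNN for illustration; not a recommended architecture."""
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),        # global pooling -> (B, 32, 1, 1)
            )
            self.head = nn.Linear(32, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x).flatten(1))

    logits = SliceClassifier()(torch.randn(4, 1, 96, 96))  # (batch, classes)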
Lecture 3: Graph Neural Networks and Neuroscience
This lecture will introduce the fundamental concepts of graph
representation in neuroscience, focusing on how connectomics [21]
can be used to model brain imaging data by using nodes as brain
regions and edges as connections. Additionally, we will discuss
Graph Neural Networks (GNNs), covering essential architectures
such as GCNs [22] and GATs [23].
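A hedged sketch of how such a model might look in PyTorch
Geometric (Coding resource [1]): a two-layer GCN [22] over a
brain graph whose nodes are regions and whose edges stand in for
a thresholded connectome. All sizes are illustrative.

    import torch
    from torch_geometric.nn import GCNConv

    class BrainGCN(torch.nn.Module):
        def __init__(self, in_dim: int, hidden: int, n_classes: int):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.head = torch.nn.Linear(hidden, n_classes)

        def forward(self, x, edge_index):
            x = self.conv1(x, edge_index).relu()   # message passing over edges
            x = self.conv2(x, edge_index).relu()
            return self.head(x.mean(dim=0))        # mean-pool nodes -> graph logits

    x = torch.randn(90, 16)                      # 90 regions, 16 node features
    edge_index = torch.randint(0, 90, (2, 400))  # stand-in for connectome edges
    logits = BrainGCN(16, 32, 2)(x, edge_index)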
Lecture 4: Geometric Deep Learning Applied in Neuroscience
This lecture will extend the previous concepts on graph neural
networks by covering a basic introduction to geometric deep
learning principles [6] and
hyperbolic space. The main aim is to discuss specific geometric
deep learning architectures that benefit from the spherical
representation of brain anatomy. These architectures include
spherical CNNs [5], GNNs [19], hyperbolic GNNs [7], and
hyperbolic CNNs [8].
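As a taste of the underlying geometry, the sketch below computes
the standard Poincaré-ball distance on which hyperbolic models
[7,8] are built: points live inside the unit ball, and distances
grow rapidly near the boundary, which suits hierarchical,
tree-like structure.

    import numpy as np

    def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
        """Geodesic distance between two points in the open unit ball."""
        sq = np.sum((u - v) ** 2)
        denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
        return float(np.arccosh(1.0 + 2.0 * sq / denom))

    u, v = np.array([0.1, 0.0]), np.array([0.1, 0.7])
    print(poincare_distance(u, v))  # larger than the Euclidean distance of 0.7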
Lecture 5: Principles of Attributional Interpretability
Methodologies
This session will highlight various Explainable AI (XAI)
techniques, focusing primarily on the principal methods used in
attributional interpretability, such as LIME, LRP, Grad-CAM,
SHAP, and GNNExplainer. [9-12, 24]
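For example, one of these methods, Grad-CAM, can be run through
the Captum library (Coding resource [2]); the toy CNN and random
input below are illustrative stand-ins for a trained brain
classifier.

    import torch
    import torch.nn as nn
    from captum.attr import LayerGradCam

    # Toy stand-in model; in practice this would be a trained classifier.
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
    )
    model.eval()

    scan = torch.randn(1, 1, 96, 96, requires_grad=True)  # stand-in brain slice
    gradcam = LayerGradCam(model, layer=model[0])          # explain the conv layer
    attribution = gradcam.attribute(scan, target=1)        # heat-map for class 1
    print(attribution.shape)                               # (1, 1, 96, 96)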
Lecture 6: Principles of Mechanistic Interpretability
Approaches
This lecture will cover key terminology related to superposition,
polysemantic representations, and the privileged basis.
Additionally, it will address the mechanistic interpretability
problem and explore how sparse autoencoders attempt to provide
explanations for various deep learning applications.
[13,14]
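A minimal sketch of the sparse autoencoder idea [14]: decompose
dense, possibly polysemantic activations into a wider, mostly
inactive dictionary of features. The dimensions and the L1
weight below are assumed values for illustration.

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_model: int = 64, d_dict: int = 512):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_dict)   # overcomplete dictionary
            self.decoder = nn.Linear(d_dict, d_model)

        def forward(self, acts: torch.Tensor):
            features = torch.relu(self.encoder(acts))   # sparse feature activations
            return self.decoder(features), features

    sae = SparseAutoencoder()
    acts = torch.randn(32, 64)                 # stand-in for model activations
    recon, feats = sae(acts)
    l1_weight = 1e-3                           # sparsity pressure (assumed value)
    loss = ((recon - acts) ** 2).mean() + l1_weight * feats.abs().mean()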
Lecture 7: Applications of Explainable Artificial Intelligence
in Neuroscience
This session will draw together the previous lectures, focusing
on real-world implementations in neuroscience. It will cover
tasks such as classification [15,20], XAI [16,17], and
mechanistic interpretability [18]. Additionally, we will discuss
the impact of AI in neuroscience and the importance of
responsible AI practices that align with ethical considerations
in medical applications.
Lecture 8: Clinical neuroscience perspective
This session will primarily focus on exploring real-world
challenges and applications of AI in clinical practice,
providing students with insights into the perspective of
healthcare professionals. To enhance their understanding, we
plan to invite an expert in the field, such as Professor Murray
Graham or Professor John Suckling, to discuss the integration of
AI in hospital settings and its impact on clinical
decision-making.
Learning outcomes
This module is designed not only to equip students with
technical expertise in AI-driven neuroscience but also to instil
a strong ethical and philosophical perspective on AI's role in
medical and societal contexts. Students will develop practical
skills in applying state-of-the-art AI networks and explainable
AI (XAI) techniques to real-world problems in neuroscience,
enabling them to critically assess and refine AI
methodologies.
By the end of the module, students will be able to:
- Apply graph neural networks, geometric deep learning models, CNNs, and transformers to neuroscience-related classification tasks.
- Interpret AI-driven models in medical imaging and connectomes with a focus on explainability and trustworthiness.
- Perform basic analysis of brain pathologies, such as psychotic disorders, neurodegenerative disorders, and brain cancer, through AI-based classification techniques.
- Evaluate the impact of AI in neuroscience and engage in responsible AI practices that align with ethical considerations in medical applications.
Through a combination of lectures and hands-on applications,
students will gain the necessary skills to contribute to the
advancement of AI in neuroscience research.
Assessment
- (85%) Group Mini-project (report write-up) at the end of the course. The projects will be developed in pairs, and the students can either self-propose their own ideas or express their preferences for one of the provided topics announced at the start of term. The write-up report will have a limit of 4,000 words, in line with other modules.
- (15%) Short presentation and viva. Students will deliver a brief presentation outlining their individual contributions to the mini project, followed by a short viva.
Assessment criteria for the write-up report will follow the project assessment criteria here: https://www.cl.cam.ac.uk/teaching/exams/acs_project_marking.pdf
Recommended reading material and resources
Theory:
[1] https://www.sciencedirect.com/science/article/pii/S1053811913002656
[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC1239902/
[3] https://arxiv.org/abs/2305.09880
[4] https://arxiv.org/abs/2010.11929
[5] https://arxiv.org/abs/1801.10130
[6] https://arxiv.org/abs/2105.13926
[7] https://arxiv.org/abs/1910.12892
[8] https://arxiv.org/abs/2303.15919
[9] https://arxiv.org/abs/1602.04938
[10] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140
[11] https://arxiv.org/abs/1705.07874
[12] https://arxiv.org/abs/1610.02391
[13] https://arxiv.org/abs/2209.10652
[14] https://arxiv.org/abs/2309.08600
[15] https://www.nature.com/articles/s43856-023-00313-w
[16] https://arxiv.org/abs/2309.00903
[17] https://arxiv.org/abs/2405.10008
[18] https://www.biorxiv.org/content/10.1101/2024.11.14.623630v1
[19] https://www.sciencedirect.com/science/article/pii/S1361841524000471
[20] https://www.sciencedirect.com/science/article/pii/S1361841522001189
[21] https://www.sciencedirect.com/science/article/pii/S105381190901074X
[22] https://arxiv.org/abs/1609.02907
[23] https://arxiv.org/abs/1710.10903
[24] https://arxiv.org/abs/1903.03894
[25] https://doi.org/10.1016/j.neuroimage.2011.05.025
Books:
[1] "Deep Learning", by Ian Goodfellow, Yoshua Bengio and
Aaron Courville.
[2] “Graph Representation Learning Book", by William L.
Hamilton.
[3] “Graph Neural Networks: Foundations, Frontiers, and
Applications”, by Lingfei Wu, Peng Cui, Jian Pei, and Liang
Zhao.
[4] “Changing Connectomes: Evolution, Development, and Dynamics
in Network Neuroscience.” by Kaiser, M. (2020). MIT Press. ISBN:
978-0262044615
Coding:
[1] https://pytorch-geometric.readthedocs.io/en/latest/
[2] https://captum.ai/
[3] https://pytorch.org/hub/huggingface_pytorch-transformers/
[4] https://huggingface.co/
[5] https://transformerlensorg.github.io/TransformerLens/
[6] https://sites.google.com/site/bctnet/