
Department of Computer Science and Technology

Course pages 2025–26 (working draft)

Principles of AI-driven Neuroscience and Translational Biomedicine

Principal lecturers: Prof Pietro Liò, Michail Mamalakis, Dr Tiago Azevedo
Taken by: MPhil ACS, Part III
Code: L205
Term: Lent
Hours: 16 (8 × 2-hour lectures)
Format: In-person lectures
Class limit: max. 20 students
Prerequisites: Deep learning (important), machine learning principles (highly important), basics of computer vision (important), basics of graph neural networks (important), basics of explainable AI (not compulsory), basics of geometric deep learning (not compulsory). Coding knowledge: Python, including libraries such as PyTorch, NumPy, and pandas.

Aims

The module Principles of AI-Driven Neuroscience and Translational Biomedicine aims to provide students with a comprehensive understanding of the interplay between selected AI models, such as convolutional neural networks (CNNs), transformers, graph neural networks (GNNs), and agentic AI, and core concepts in brain anatomy, connectomics, and medical imaging. Particular emphasis is placed on exploring the reasoning behind the predictive power of AI models, highlighting techniques such as mechanistic and attributional interpretability, as well as logic and syllogistic reasoning in neuroscience, to identify and validate existing patterns and potentially push the boundaries of current knowledge.

By the end of the module, students will be familiar with classification tasks involving real-world examples of brain pathologies, including psychosis, brain cancer, and neurodegenerative disorders. They will gain a strong grasp of cutting-edge concepts such as mechanistic and attributional interpretability, along with logic and syllogistic reasoning and agentic AI. Finally, students will explore how AI can drive neuroscience forward by uncovering novel patterns and enhancing our understanding of brain function and structure.

The course comprises the following eight two-hour lectures covering foundational topics:

  1. Introduction to Basic Brain Anatomy, Connectomes and Neuroimaging
  2. Fundamental Deep Learning and Classification Structures in Neuroscience
  3. Cognitive and Agentic AI Foundations
  4. Principles of Logic Reasoning and Syllogistic Reasoning in Neuroscience
  5. Attributional Interpretability Approaches in Medical Large Language Models (m-LLM) and Neuroimaging
  6. Principles of Mechanistic Interpretability Approaches in Medical Large Language Models (m-LLM) and Neurobiology
  7. Applications of Explainable Artificial Intelligence in Neuroscience
  8. Clinical Neuroscience Perspective

Syllabus

Lecture 1: Introduction to Basic Brain Anatomy, Connectomes and Neuroimaging
The aim of this lecture is to introduce basic brain anatomy, focusing on the different lobes, sulcal regions, and the brain’s folding patterns. Additionally, it will introduce the fundamental characteristics of various medical imaging modalities, with an emphasis on anatomical and functional MRI. The lecture will also present methods for extracting connectome information [25] using imaging techniques such as functional and structural MRI [1,2].
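
A minimal sketch of functional-connectome construction of the kind discussed here, assuming a parcellated fMRI signal is already available as a NumPy array; the shapes and the 0.3 threshold are illustrative assumptions, not the course's actual pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    timeseries = rng.standard_normal((200, 90))  # placeholder: 200 timepoints, 90 brain regions

    # Functional connectivity: pairwise Pearson correlation between regional signals.
    connectome = np.corrcoef(timeseries.T)       # (regions, regions) matrix
    np.fill_diagonal(connectome, 0.0)            # discard self-connections

    # Threshold weak edges to obtain a sparse adjacency matrix for graph analysis.
    adjacency = np.where(np.abs(connectome) > 0.3, connectome, 0.0)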

Lecture 2: Fundamental Deep Learning and Classification Structures in Neuroscience
This lecture will briefly recap the fundamentals of deep learning architectures and layers, such as MLPs, transformers, attention layers, and CNN blocks. It will also present state-of-the-art classifier architectures such as ViT and introduce classification problems in neuroscience [3,4]. We will then introduce the fundamental concepts of graph representation in neuroscience, focusing on how connectomics [21] can be used to model brain imaging data, with brain regions as nodes and their connections as edges. Additionally, we will discuss Graph Neural Networks (GNNs), covering essential architectures such as GCNs [22] and GATs [23].
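
As a concrete illustration, here is a minimal two-layer GCN sketch in PyTorch Geometric (Coding resource [1]) for graph-level classification, with brain regions as nodes; the dimensions and toy inputs are illustrative assumptions:

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv, global_mean_pool

    class BrainGCN(torch.nn.Module):
        def __init__(self, in_dim, hidden_dim, num_classes):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden_dim)
            self.conv2 = GCNConv(hidden_dim, hidden_dim)
            self.head = torch.nn.Linear(hidden_dim, num_classes)

        def forward(self, x, edge_index, batch):
            x = F.relu(self.conv1(x, edge_index))   # message passing over connectome edges
            x = F.relu(self.conv2(x, edge_index))
            x = global_mean_pool(x, batch)          # one embedding per brain graph
            return self.head(x)

    # Toy graph: 4 regions with 3-dimensional node features and 4 directed edges.
    x = torch.randn(4, 3)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
    batch = torch.zeros(4, dtype=torch.long)
    logits = BrainGCN(3, 16, 2)(x, edge_index, batch)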

Lecture 3: Cognitive and Agentic AI Foundations
This lecture introduces the emerging field of agentic artificial intelligence, emphasising cognitive foundations, autonomous decision-making, and adaptive reasoning within neuroscience contexts. Drawing from cognitive frameworks outlined by Vemula [7b], students will explore how cognitive architectures inspire the development of autonomous agents capable of complex tasks relevant to neuroscience. Additionally, we will illustrate how integrating knowledge graphs, reasoning processes, and agentic behaviours can enhance AI-driven analyses and clinical decision-making [8b]. The discussion will include practical considerations for developing robust agentic AI models that exhibit both interpretability and autonomy in medical and neuroscientific applications. 
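
As a toy illustration of the agentic pattern (not the frameworks of [7b, 8b], which add memory stores, knowledge graphs, and LLM-backed reasoning), a perceive-reason-act loop can be sketched as follows; all names here are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class TriageAgent:
        goal: str
        memory: list = field(default_factory=list)

        def perceive(self, observation: str) -> None:
            self.memory.append(observation)

        def decide(self) -> str:
            # Stand-in for a reasoning step (e.g. querying a knowledge graph or an LLM).
            if any("anomaly" in m for m in self.memory):
                return "flag case for clinician review"
            return "continue monitoring"

    agent = TriageAgent(goal="triage imaging findings")
    agent.perceive("scan 17: anomaly in left temporal lobe")
    print(agent.decide())  # -> flag case for clinician review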

Lecture 4: Principles of Logic Reasoning and Syllogistic Reasoning in Neuroscience
This lecture introduces the micro-world of human rationality and syllogistic reasoning [26–27], illustrated by classical logical structures such as: All Greeks are humans; all humans are mortal; therefore, all Greeks are mortal. The concept of deterministic neural logical reasoning is presented from a neuroscience perspective, emphasising spatial reasoning as a foundation for domain-general reasoning [28–30], where reasoning is understood as a process of mental model construction and inspection [31]. In addition, the lecture demonstrates how domain-general syllogistic reasoning can emerge from spatial reasoning processes. It introduces the sphere neural network within a set-theoretic neural architecture, which enables symbolic-level syllogistic reasoning in vector space, achieved without reliance on training data [32].
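
The deterministic character of this inference can be seen in a toy set-theoretic reading (only an illustration of the pattern; the sphere neural network of [32] instead embeds such set relations as nested spheres in vector space):

    def all_are(a: set, b: set) -> bool:
        """Interpret 'All A are B' as the subset relation A ⊆ B."""
        return a <= b

    greeks = {"Socrates", "Plato"}
    humans = greeks | {"Hypatia"}
    mortals = humans | {"a sparrow"}

    # Subset relations compose, so the conclusion follows deterministically.
    assert all_are(greeks, humans)    # All Greeks are humans.
    assert all_are(humans, mortals)   # All humans are mortal.
    assert all_are(greeks, mortals)   # Therefore, all Greeks are mortal.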

Lecture 5: Attributional Interpretability Approaches in Medical Large Language Models (m-LLM) and Neuroimaging
This session will highlight various Explainable AI (XAI) techniques for attributional interpretability, such as Grad-CAM, SHAP and GNNExplainer [9-12, 24]. The lecture will show how XAI can be used to verify patterns and potentially identify new biomarkers [16,17,26] in Large Language Models, Transformers and GNNs in medical applications and neuroimaging.
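
A minimal attribution sketch with Captum (Coding resource [2]), using Integrated Gradients over a small CNN stand-in for an imaging classifier; the model and input shapes are illustrative assumptions rather than the course's actual pipeline:

    import torch
    from captum.attr import IntegratedGradients

    model = torch.nn.Sequential(
        torch.nn.Conv2d(1, 8, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1),
        torch.nn.Flatten(),
        torch.nn.Linear(8, 2),
    ).eval()

    scan = torch.randn(1, 1, 64, 64, requires_grad=True)  # placeholder image slice
    ig = IntegratedGradients(model)
    # Per-pixel contribution of the input to the class-1 logit; the resulting map
    # has the same shape as the input and can be overlaid on the scan for review.
    attributions = ig.attribute(scan, target=1)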

Lecture 6: Principles of Mechanistic Interpretability Approaches in Medical Large Language Models (m-LLM) and Neurobiology
This lecture will cover key terminology related to superposition, polysemantic representations, and the privileged basis. Additionally, it will address the problem of mechanistic interpretability and explore how sparse autoencoders attempt to provide explanations for various deep learning applications in the medical domain, such as clinical tasks (e.g., medical large language models, m-LLMs) and neurobiology (e.g., multi-omics data, protein language models, PLMs) [12-14, 18, 33, 34].
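
A minimal sparse-autoencoder sketch of the kind used in this literature [13,14]: an overcomplete dictionary trained to reconstruct hidden activations under an L1 sparsity penalty, so that only a few features fire at once. All dimensions and the penalty weight are illustrative assumptions:

    import torch

    d_model, d_dict = 64, 256           # dictionary wider than the activation space
    enc = torch.nn.Linear(d_model, d_dict)
    dec = torch.nn.Linear(d_dict, d_model)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

    for step in range(200):
        acts = torch.randn(32, d_model)      # stand-in for a model's hidden activations
        features = torch.relu(enc(acts))     # sparse, non-negative feature activations
        recon = dec(features)
        # Reconstruction error plus an L1 penalty encourages few active features,
        # each intended to align with an interpretable direction in activation space.
        loss = torch.nn.functional.mse_loss(recon, acts) + 1e-3 * features.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()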

Lecture 7: Applications of Explainable Artificial Intelligence in Neuroscience
This session will bring together the material from all previous lectures, focusing on real-world implementations in neuroscience. It will cover tasks such as classification [15,20], XAI [16,17], logic reasoning, and mechanistic interpretability [18]. Additionally, we will discuss the impact of AI in neuroscience and the importance of responsible AI practices that align with ethical considerations in medical applications.

Lecture 8: Clinical Neuroscience Perspective
This session will primarily focus on exploring real-world challenges and applications of AI in clinical practice, providing students with insights into the perspective of healthcare professionals. To enhance their understanding, we plan to invite experts in the clinical field (such as Professor Richard Bethlehem, https://neuroscience.cam.ac.uk/member/rb643/, and Dr Mate Aller, https://www.mrc-cbu.cam.ac.uk/people/Mate.Aller/) to discuss the integration of AI in hospital settings and its impact on clinical decision-making.

Learning outcomes

This module is designed not only to equip students with technical expertise in AI-driven neuroscience but also to instil a strong ethical and philosophical perspective on AI’s role in medical and societal contexts. Students will develop practical skills in applying state-of-the-art AI networks and interpretability and reasoning techniques, such as mechanistic and attributional interpretability, as well as logic and syllogistic reasoning, to real-world problems in neuroscience, enabling them to critically assess and refine AI methodologies. By the end of the module, students will be able to:

  • Apply graph neural networks, CNNs, transformers and agentic AI to neuroscience-related tasks.
  • Interpret AI-driven models in medical imaging and connectomics, with a focus on interpretability, reasoning, trustworthiness, and novel pattern discovery.
  • Conduct basic analysis of brain pathologies, including psychotic disorders, neurodegenerative diseases, and brain cancer, using AI-based classification techniques.
  • Evaluate the impact of AI in neuroscience and engage in responsible AI practices aligned with ethical considerations in medical applications. 

Through a combination of lectures and hands-on activities, students will gain the skills needed to contribute meaningfully to the advancement of AI in neuroscience research.

Assessment

- (85%) Group mini-project (report write-up) at the end of the course. Projects will be developed in pairs, and students can either self-propose their own ideas or express their preferences for one of the provided topics announced at the start of term. The write-up report will have a limit of 4,000 words, in line with other modules. To ensure a fair evaluation of individual contributions, each student will be asked to submit a brief statement at the end of the mini-project outlining their teammate’s contributions. This will provide insight into individual effort, team dynamics, and the overall quality of collaboration. Additionally, each submitted mini-project will include a section specifying the contributions of each member, which can be cross-referenced with the individual statements.

- (15%) Short presentation and viva. Students will deliver a brief presentation outlining their individual contributions to the mini-project, followed by a short viva.

Assessment criteria for the write-up report will follow the project assessment criteria here: https://www.cl.cam.ac.uk/teaching/exams/acs_project_marking.pdf

Recommended reading material and resources

Theory: 

[1] https://www.sciencedirect.com/science/article/pii/S1053811913002656
[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC1239902/ 
[3] https://arxiv.org/abs/2305.09880 
[4] https://arxiv.org/abs/2010.11929 
[5] https://arxiv.org/abs/1801.10130 
[6] https://arxiv.org/abs/2105.13926 
[7] https://arxiv.org/abs/1910.12892 
[8] https://arxiv.org/abs/2303.15919 
[9] https://arxiv.org/abs/1602.04938 
[10] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140 
[11] https://arxiv.org/abs/1705.07874 
[12] https://arxiv.org/abs/1610.02391 
[13] https://arxiv.org/abs/2209.10652 
[14] https://arxiv.org/abs/2309.08600
[15] https://www.nature.com/articles/s43856-023-00313-w 
[16] https://arxiv.org/abs/2309.00903 
[17] https://arxiv.org/abs/2405.10008 
[18] https://www.biorxiv.org/content/10.1101/2024.11.14.623630v1 
[19] https://www.sciencedirect.com/science/article/pii/S1361841524000471 
[20] https://www.sciencedirect.com/science/article/pii/S1361841522001189
[21] https://www.sciencedirect.com/science/article/pii/S105381190901074X 
[22] https://arxiv.org/abs/1609.02907 
[23] https://arxiv.org/abs/1710.10903 
[24] https://arxiv.org/abs/1903.03894 
[25] https://doi.org/10.1016/j.neuroimage.2011.05.025
[26] https://doi.org/10.1109/CITRExCompanion65208.2025.10981502 
[27] S. Khemlani, P. N. Johnson-Laird (2012), Theories of the syllogism: A meta-analysis, Psychological Bulletin 138 (3) 427–457. 
[28] S. Ferrigno, Y. Huang, J. F. Cantlon (2021), Reasoning Through the Disjunctive Syllogism in Monkeys, Psychological Science 32 (2) 1–9. 
[29] J. L. S. Bellmund, P. Gaerdenfors, E. I. Moser, C. F. Doeller (2018), Navigating cognition: Spatial codes for human thinking, Science 362 (6415). 
[30] K. L. Alfred, A. C. Connolly, J. S. Cetron, D. J. M. Kraemer (2020), Mental models use common neural spatial structure for spatial and abstract content, Communications Biology 3 (1). 
[31] M. Ragni, M. Knauff (2013), A theory and a computational model of spatial reasoning with preferred mental models, Psychological review 120:561–588. 
[32] T. Dong, M. Jamnik, P. Liò. (2025). Neural Reasoning for Sure Through Constructing Explainable Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(11), 11598-11606. 
[33] https://doi.org/10.1109/BIBM62325.2024.10822695 
[34] https://doi.org/10.1109/BIBM62325.2024.10821894

Books: 

[1b] "Deep Learning", by Ian Goodfellow, Yoshua Bengio and Aaron Courville. 
[2b] “Graph Representation Learning Book", by William L. Hamilton. 
[3b] “Graph Neural Networks: Foundations, Frontiers, and Applications”, by Lingfei Wu, Peng Cui, Jian Pei, and Liang Zhao. 
[4b] “Changing Connectomes: Evolution, Development, and Dynamics in Network Neuroscience.” by Kaiser, M. (2020). MIT Press. ISBN: 978-0262044615 
[5b] F. Rosenblatt (1962), Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books, Washington, USA. 
[6b] B. Tversky (2019), Mind in Motion, Basic Books, New York, USA. 
[7b] "Cognitive Foundations of Agentic AI - From Theory to Practice" by Anand Vemula 
[8b] "Agentic Graph RAG - Integrating Knowledge Graphs, Reasoning, and Agency for Enterprise AI" by Anthony Alcaraz 

Coding: 

[1] https://pytorch-geometric.readthedocs.io/en/latest/ 
[2] https://captum.ai/ 
[3] https://pytorch.org/hub/huggingface_pytorch-transformers/ 
[4] https://huggingface.co/ 
[5] https://transformerlensorg.github.io/TransformerLens/ 
[6] https://sites.google.com/site/bctnet/