Lecturer: Dr John Daugman
Prerequisite courses: Continuous Mathematics, Probability
The aims of this course are to investigate how biological nervous systems
accomplish the goals of machine intelligence while using radically
different strategies, architectures, and hardware; and to investigate how
artificial neural systems can be designed to emulate some of those
biological principles in the hope of capturing some of their performance.
- Natural versus artificial substrates of intelligence.
Comparison of biological and artificial intelligence in terms of
architectures, hardware, and strategies.
Levels of analysis; mechanism and explanation; philosophical issues.
Basic neural network architectures compared with rule-based or
symbolic approaches to learning and problem-solving.
- Neurobiological wetware: architecture and function of the brain.
Human brain architecture. Sensation and perception; learning and memory.
What we can learn from neurology of brain trauma; modular organisation
and specialisation of function. Aphasias, agnosias, apraxias.
How stochastic communications media, unreliable and randomly distributed
hardware, very slow and asynchronous clocking, and imprecise connectivity
blueprints nonetheless give us unrivalled performance in real-time tasks
involving perception, learning, and motor control.
- Neural processing and signalling.
Information content of neural signals. Spike generation processes.
Neural hardware for both processing and communications. Can the
mechanisms for neural processing and signalling be viably separated?
Biophysics of nerve cell membranes and differential ionic
permeability. Excitable membranes. Logical operators.
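The claim that excitable membranes can realise logical operators is often illustrated with the McCulloch-Pitts threshold unit, a minimal sketch of which follows; the particular weights and thresholds below are illustrative choices, not taken from the lecture notes:

```python
# McCulloch-Pitts threshold unit: outputs 1 ("fires") when the weighted
# sum of its binary inputs reaches a threshold, mimicking an excitable
# membrane that spikes once depolarisation crosses a critical level.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# AND: both inputs must be active to reach the threshold.
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

# OR: a single active input suffices.
def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

# NOT: an inhibitory (negative) weight keeps the unit below threshold
# whenever its input is active.
def NOT(a):
    return mp_neuron([a], [-1], threshold=0)
```

The same unit computes different Boolean functions purely by a change of weights and threshold, which is the sense in which membrane biophysics can be read as implementing logic.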
- Stochasticity in neural codes.
Principal Components Analysis of spike trains. Evidence for detailed
temporal modulation as a neural coding and communications strategy.
Is stochasticity also a fundamental neural computing strategy for
searching large solution spaces, entertaining candidate hypotheses
about patterns, and memory retrieval? John von Neumann's conjecture.
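Principal Components Analysis of spike trains can be sketched as follows, assuming (as is common but not stated above) that each trial's spike train is binned into a vector of spike counts; the data here are synthetic:

```python
import numpy as np

# PCA of spike trains: each row is one trial's spike train, binned into
# spike counts; PCA finds the directions of greatest variance across
# trials, i.e. the dominant temporal patterns of modulation.
rng = np.random.default_rng(0)
n_trials, n_bins = 200, 50

# Synthetic data (illustrative): a shared temporal profile plus Poisson noise.
profile = np.sin(np.linspace(0.0, np.pi, n_bins))
trains = rng.poisson(lam=1.0 + 4.0 * profile, size=(n_trials, n_bins))

# PCA via eigendecomposition of the covariance matrix.
centred = trains - trains.mean(axis=0)
cov = centred.T @ centred / (n_trials - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalue order
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]              # columns = principal components
explained = eigvals[order] / eigvals.sum()  # fraction of variance per component

# Project each spike train onto the two leading components.
scores = centred @ components[:, :2]
```

The leading components give a low-dimensional description of how spike trains vary from trial to trial, which is the starting point for asking whether detailed temporal modulation carries information.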
- Neural operators that encode, analyse, and represent images.
How the mammalian visual system, from retina to brain, extracts information
from optical images and sequences of them to make sense of the world.
Description and modelling of neural operators in engineering terms as
filters, coders, compressors, and pattern matchers.
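One standard engineering description of such a neural operator is the 2D Gabor filter, a common model of the oriented, frequency-tuned receptive fields of simple cells in primary visual cortex; a minimal sketch, with illustrative parameter values:

```python
import numpy as np

# 2D Gabor filter: a Gaussian envelope multiplying a sinusoidal carrier,
# giving a localised, oriented, frequency-tuned image operator -- a
# filter-bank model of cortical simple-cell receptive fields.
def gabor_kernel(size, sigma, wavelength, theta):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier oscillates along orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()   # zero DC response: ignores uniform brightness

k = gabor_kernel(size=21, sigma=4.0, wavelength=8.0, theta=0.0)
```

Convolving an image with a bank of such kernels at several orientations and wavelengths yields the kind of filtered, coded representation described above.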
- Cognition and evolution. Neuropsychology of face recognition.
The sorts of tasks, primarily social, that shaped the evolution of
human brains. The computational load of social cognition as the
driving factor for the evolution of large brains. How the
degrees-of-freedom within faces and between faces are extracted and
encoded by specialised areas of the brain concerned with the
detection, recognition, and interpretation of faces and facial
expressions. Efforts to simulate these faculties in artificial systems.
- Artificial neural networks for pattern recognition.
A brief history of artificial neural networks and some successful
applications. Central concepts of learning from data, and foundations
in probability theory. Regression and classification problems viewed
as non-linear mappings. Analogy with polynomial curve fitting.
General "linear" models. The curse of dimensionality, and the
need for adaptive basis functions.
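The analogy with polynomial curve fitting can be made concrete in a few lines of NumPy; the data and polynomial degree below are illustrative:

```python
import numpy as np

# Regression as a non-linear mapping, by analogy with polynomial curve
# fitting: a degree-M polynomial is a "general linear" model -- linear in
# its coefficients w, non-linear in the input x through the fixed basis
# functions 1, x, x^2, ..., x^M.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
t = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # noisy targets

M = 3                                 # polynomial degree
Phi = np.vander(x, M + 1)             # design matrix of basis-function values
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)  # least-squares coefficients
y = Phi @ w                           # fitted curve
```

With many input dimensions, the number of such fixed basis functions grows combinatorially with dimension and degree, which is the curse of dimensionality motivating adaptive basis functions.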
- Probabilistic inference.
Bayesian and frequentist views of probability and uncertainty.
Regression and classification expressed in terms of probability
distributions. Density estimation. Likelihood function and maximum
likelihood. Neural network output viewed as conditional mean.
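The link between maximum likelihood and the conditional mean can be checked numerically; a sketch on synthetic Gaussian data (the true mean and variance below are illustrative):

```python
import numpy as np

# Maximum likelihood for a Gaussian: for data assumed drawn from
# N(mu, sigma^2), the log-likelihood is maximised by the sample mean --
# the same value that minimises the sum-of-squares error. This is why a
# network trained on squared error estimates the conditional mean of
# the targets.
rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=1.0, size=10_000)

mu_ml = data.mean()                        # maximum-likelihood mean
sigma2_ml = np.mean((data - mu_ml) ** 2)   # ML variance (the biased estimator)

def sse(mu):
    """Sum-of-squares error: minimised at the ML estimate mu_ml."""
    return np.sum((data - mu) ** 2)
```

Perturbing mu away from the sample mean can only increase the sum-of-squares error, confirming that the squared-error and maximum-likelihood views coincide here.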
- Network models for classification and decision theory.
Probabilistic formulation of classification problems. Prior and
posterior probabilities. Decision theory and minimum misclassification
rate. The distinction between inference and decision. Estimation of
posterior probabilities compared with the use of discriminant functions.
Neural networks as estimators of posterior probabilities.
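The probabilistic formulation above can be sketched for a two-class problem with Gaussian class-conditional densities; the priors and class means are illustrative choices:

```python
import math

# Bayes' theorem converts priors and class-conditional likelihoods into
# posterior probabilities; choosing the class with the larger posterior
# minimises the misclassification rate.
PRIORS = {"C1": 0.7, "C2": 0.3}
MEANS = {"C1": 0.0, "C2": 2.0}
SIGMA = 1.0

def likelihood(x, c):
    """Gaussian class-conditional density p(x | c)."""
    z = (x - MEANS[c]) / SIGMA
    return math.exp(-0.5 * z * z) / (SIGMA * math.sqrt(2.0 * math.pi))

def posterior(x, c):
    """Posterior p(c | x) by Bayes' theorem."""
    evidence = sum(likelihood(x, k) * PRIORS[k] for k in PRIORS)
    return likelihood(x, c) * PRIORS[c] / evidence

def decide(x):
    """Minimum-misclassification-rate decision: pick the larger posterior."""
    return max(PRIORS, key=lambda c: posterior(x, c))
```

Note the separation between inference (computing the posteriors) and decision (picking a class), which is exactly the distinction drawn in the syllabus; a network trained as a classifier can be viewed as estimating the posterior directly.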
At the end of the course students should
- be able to describe key aspects of brain function and neural processing
in terms of computation, architecture, and communication
- be able to analyse the viability of distinctions such as
computing versus communicating, signal versus noise, and
algorithm versus hardware, when these dichotomies from Computer
Science are applied to the brain
- understand the neurobiological mechanisms of vision well enough to think
of ways to implement them in machine vision
- understand basic principles of the design and function of artificial
neural networks that learn from examples and solve problems in classification
and pattern recognition
Aleksander, I. (1989). Neural Computing Architectures.
North Oxford Academic Press.
Bishop, C.M. (1995). Neural Networks for Pattern Recognition.
Oxford University Press.
Haykin, S. (1994). Neural Networks: A Comprehensive Foundation. Macmillan.
Hecht-Nielsen, R. (1991). Neurocomputing. Addison-Wesley.
Taken by: Part II
Number of lectures: 15
Lecture location: Lecture Theatre 2
Lecture times: 12:00 on Tuesdays and Thursdays, starting 17-Jan-02
(For those interested in the idea of "vision as graphics, rather than fidelity
to the image," some compelling visual illusions have been assembled by Peter Ford.)
19 February 2002: Exercises 15(B) and 16(B)
21 February 2002: Exercise 17
(For those interested in the idea of "social computing as the primary
computational load shaping the evolution of the human brain," some compelling
if amusing answers to a past Tripos paper question about this topic can be found.)
26 February 2002: Exercises 3 and 15(A)
Please read the overview article at the end of the Lecture Notes.
5 March 2002: Exercises 4 and 6
7 March 2002: Exercises 5 and 7.1
12 March 2002: Exercises 9 and 13