2001-02

**Lecturer:** Dr John Daugman

*Prerequisite courses: Continuous Mathematics, Probability*

**Natural versus artificial substrates of intelligence.** Comparison of biological and artificial intelligence in terms of architectures, hardware, and strategies. Levels of analysis; mechanism and explanation; philosophical issues. Basic neural network architectures compared with rule-based or symbolic approaches to learning and problem-solving.

**Neurobiological wetware: architecture and function of the brain.** Human brain architecture. Sensation and perception; learning and memory. What we can learn from the neurology of brain trauma; modular organisation and specialisation of function. Aphasias, agnosias, apraxias. How stochastic communications media, unreliable and randomly distributed hardware, very slow and asynchronous clocking, and imprecise connectivity blueprints nevertheless give us unrivalled performance in real-time tasks involving perception, learning, and motor control.

**Neural processing and signalling.** Information content of neural signals. Spike generation processes. Neural hardware for both processing and communications. Can the mechanisms for neural processing and signalling be viably separated? Biophysics of nerve cell membranes and differential ionic permeability. Excitable membranes. Logical operators.

**Stochasticity in neural codes.** Principal Components Analysis of spike trains. Evidence for detailed temporal modulation as a neural coding and communications strategy. Is stochasticity also a fundamental neural computing strategy for searching large solution spaces, entertaining candidate hypotheses about patterns, and retrieving memories? John von Neumann's conjecture. Simulated annealing.

**Neural operators that encode, analyse, and represent image structure.** How the mammalian visual system, from retina to brain, extracts information from optical images and sequences of them to make sense of the world. Description and modelling of neural operators in engineering terms as filters, coders, compressors, and pattern matchers.

**Cognition and evolution. Neuropsychology of face recognition.** The sorts of tasks, primarily social, that shaped the evolution of human brains. The computational load of social cognition as the driving factor for the evolution of large brains. How the degrees of freedom within faces and between faces are extracted and encoded by specialised areas of the brain concerned with the detection, recognition, and interpretation of faces and facial expressions. Efforts to simulate these faculties in artificial systems.

**Artificial neural networks for pattern recognition.** A brief history of artificial neural networks and some successful applications. Central concepts of learning from data, and foundations in probability theory. Regression and classification problems viewed as non-linear mappings. Analogy with polynomial curve fitting. General "linear" models. The curse of dimensionality, and the need for adaptive basis functions.

**Probabilistic inference.** Bayesian and frequentist views of probability and uncertainty. Regression and classification expressed in terms of probability distributions. Density estimation. Likelihood function and maximum likelihood. Neural network output viewed as a conditional mean.

**Network models for classification and decision theory.** Probabilistic formulation of classification problems. Prior and posterior probabilities. Decision theory and minimum misclassification rate. The distinction between inference and decision. Estimation of posterior probabilities compared with the use of discriminant functions. Neural networks as estimators of posterior probabilities.
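The final topics above (Bayes' rule, posterior probabilities, and the minimum-misclassification-rate decision rule) can be illustrated with a small sketch. The two classes, their priors, and their Gaussian class-conditional densities below are invented purely for illustration:

```python
import math

# Hypothetical 1-D example: two classes C1, C2 with Gaussian
# class-conditional densities p(x|Ck) and prior probabilities P(Ck).
PRIORS = {"C1": 0.7, "C2": 0.3}                # assumed priors (illustrative)
PARAMS = {"C1": (0.0, 1.0), "C2": (2.0, 1.0)}  # assumed (mean, std) per class

def gaussian_pdf(x, mean, std):
    """Density of N(mean, std^2) evaluated at x."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def posterior(x):
    """Bayes' rule: P(Ck|x) = p(x|Ck) P(Ck) / sum_j p(x|Cj) P(Cj)."""
    joint = {k: gaussian_pdf(x, *PARAMS[k]) * PRIORS[k] for k in PRIORS}
    evidence = sum(joint.values())
    return {k: v / evidence for k, v in joint.items()}

def decide(x):
    """Minimum-misclassification-rate rule: choose the class
    with the largest posterior probability."""
    post = posterior(x)
    return max(post, key=post.get)

# A point near the C1 mean is assigned to C1.
print(decide(0.0))   # -> C1
```

The point of the sketch is the distinction drawn in the syllabus between inference (computing `posterior`) and decision (applying `decide`); a neural network trained on labelled data can be viewed as estimating the former directly.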

At the end of the course students should

- be able to describe key aspects of brain function and neural processing in terms of computation, architecture, and communication
- be able to analyse the viability of distinctions such as computing *versus* communicating, signal *versus* noise, and algorithm *versus* hardware, when these dichotomies from Computer Science are applied to the brain
- understand the neurobiological mechanisms of vision well enough to think of ways to implement them in machine vision
- understand basic principles of the design and function of artificial neural networks that learn from examples and solve problems in classification and pattern recognition
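The last outcome can be made concrete with a minimal network that learns from examples: a single logistic unit trained by gradient descent on a toy two-class problem. The training data, learning rate, and epoch count below are invented for illustration; as in the syllabus, the unit's output can be read as an estimate of the posterior probability of class 1 given the input.

```python
import math

# Invented toy training set: 1-D inputs with binary class labels.
DATA = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train(data, lr=0.5, epochs=200):
    """Stochastic gradient descent on the cross-entropy error for a
    single logistic unit y = sigmoid(w*x + b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, t in data:
            y = sigmoid(w * x + b)
            # Gradient of the cross-entropy wrt (w, b) is (y - t) * (x, 1).
            w -= lr * (y - t) * x
            b -= lr * (y - t)
    return w, b

w, b = train(DATA)
# The trained unit's output approximates P(class 1 | x): it is
# near 1 for clearly class-1 inputs and near 0 for class-0 inputs.
print(round(sigmoid(w * 2.0 + b), 3))
```

This is the simplest possible instance of the course's theme that learning from data is non-linear function fitting grounded in probability theory; multi-layer networks replace the single unit with adaptive basis functions.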

**Recommended reading:**

- Bishop, C.M. (1995). *Neural Networks for Pattern Recognition*. Oxford University Press.
- Haykin, S. (1994). *Neural Networks: A Comprehensive Foundation*. Macmillan.
- Hecht-Nielsen, R. (1991). *Neurocomputing*. Addison-Wesley.

**Lecturer:** Dr John Daugman (jgd1000@cl.cam.ac.uk)

**Taken by:** Part II

**Number of lectures:** 15

**Lecture location:** Lecture Theatre 2

**Lecture times:** 12:00 on TT starting 17-Jan-02


- Learning Guide, Lecture Summary, and Worked Examples (PDF document)
- Syllabus

- Past exam questions

**Assignments from the Learning Guide:**

- 24 January 2002: Exercise 1 and Exercise 10
- 31 January 2002: Exercise 16-B and Exercise 11
- 5 February 2002: Exercise 2
- 12 February 2002: Exercises 6.2, 7 (except for 7.1), and 14.3 (For those interested in the idea of "vision as graphics, rather than fidelity to the image," some compelling visual illusions have been assembled by Peter Ford here.)
- 19 February 2002: Exercises 15(B) and 16(B)
- 21 February 2002: Exercise 17 (For those interested in the idea of "social computing as the primary computational load shaping the evolution of the human brain," some compelling if amusing answers to a past Tripos paper question about this topic can be found here.)
- 26 February 2002: Exercises 3 and 15(A)

**Please read the overview article at the end of the Lecture Notes.**

- 5 March 2002: Exercises 4 and 6
- 7 March 2002: Exercises 5 and 7.1
- 12 March 2002: Exercises 9 and 13