Neural Computing
Lecturer: Dr John Daugman
Prerequisite courses: Continuous Mathematics, Probability
Aims
The aims of this course are to investigate how biological nervous systems
accomplish the goals of machine intelligence while using radically
different strategies, architectures, and hardware; and to investigate how
artificial neural systems can be designed to emulate some of those
biological principles in the hope of capturing some of their performance.
Lectures
- Natural versus artificial substrates of intelligence.
Comparison of biological and artificial intelligence in terms of
architectures, hardware, and strategies.
Levels of analysis; mechanism and explanation; philosophical issues.
Basic neural network architectures compared with rule-based or
symbolic approaches to learning and problem-solving.
- Neurobiological wetware: architecture and function of the brain.
Human brain architecture. Sensation and perception; learning and memory.
What we can learn from neurology of brain trauma; modular organisation
and specialisation of function. Aphasias, agnosias, apraxias.
How stochastic communications media, unreliable and randomly distributed
hardware, very slow and asynchronous clocking, and imprecise connectivity
blueprints nonetheless give us unrivalled performance in real-time tasks
involving perception, learning, and motor control.
- Neural processing and signalling.
Information content of neural signals. Spike generation processes.
Neural hardware for both processing and communications. Can the
mechanisms for neural processing and signalling be viably separated?
Biophysics of nerve cell membranes and differential ionic
permeability. Excitable membranes. Logical operators.
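
As an illustrative aside, the spike-generation process can be caricatured
by a leaky integrate-and-fire neuron, a drastic simplification of real
membrane biophysics; a minimal Python sketch, with illustrative (not
physiological) parameter values:

    # Leaky integrate-and-fire neuron: a coarse caricature of an excitable
    # membrane (parameter values are illustrative, not physiological).
    import numpy as np

    def lif_spike_times(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                        v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
        """Euler-integrate the membrane voltage; spike on threshold."""
        v, spikes = v_rest, []
        for step, i_t in enumerate(i_input):
            # dV/dt = (-(V - V_rest) + R_m * I) / tau
            v += dt * (-(v - v_rest) + r_m * i_t) / tau
            if v >= v_thresh:        # threshold crossed: fire and reset
                spikes.append(step * dt)
                v = v_reset
        return spikes

    # A constant drive for 100 ms yields a regular spike train.
    spikes = lif_spike_times(np.full(1000, 2.0))
    print(len(spikes), "spikes at times (ms):", spikes)
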
- Stochasticity in neural codes.
Principal Components Analysis of spike trains. Evidence for detailed
temporal modulation as a neural coding and communications strategy.
Is stochasticity also a fundamental neural computing strategy for
searching large solution spaces, entertaining candidate hypotheses
about patterns, and retrieving memories? John von Neumann's conjecture.
Simulated annealing.
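
A minimal sketch of simulated annealing on a toy one-dimensional search
problem (the energy function, neighbourhood move, and cooling schedule
are all arbitrary illustrative choices):

    import math, random

    def simulated_annealing(energy, neighbour, x0, t0=1.0,
                            cooling=0.995, steps=10000):
        """Stochastic search: accept uphill moves with probability
        exp(-dE/T), so early high temperature explores widely and the
        falling temperature gradually freezes the search into a minimum."""
        x, e, t = x0, energy(x0), t0
        for _ in range(steps):
            x_new = neighbour(x)
            e_new = energy(x_new)
            if e_new < e or random.random() < math.exp((e - e_new) / t):
                x, e = x_new, e_new
            t *= cooling               # geometric cooling schedule
        return x, e

    # Toy task: minimise a bumpy 1-D energy with many local minima.
    f = lambda x: x * x + 10 * math.sin(3 * x)
    print(simulated_annealing(f, lambda x: x + random.gauss(0, 0.5), 5.0))
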
- Neural operators that encode, analyse, and represent image
structure.
How the mammalian visual system, from retina to brain, extracts information
from optical images and sequences of them to make sense of the world.
Description and modelling of neural operators in engineering terms as
filters, coders, compressors, and pattern matchers.
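
One classical engineering model of such an operator is the 2D Gabor
filter, a Gaussian-windowed complex sinusoid tuned to orientation and
spatial frequency; a minimal sketch with illustrative parameters:

    import numpy as np

    def gabor_kernel(size=21, sigma=4.0, wavelength=8.0, theta=0.0):
        """2D Gabor filter: a complex sinusoid under a Gaussian envelope,
        tuned to orientation theta and spatial frequency 1/wavelength."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        carrier = np.exp(2j * np.pi * xr / wavelength)
        return envelope * carrier

    # Convolving an image with a bank of such kernels at several
    # orientations gives local measurements of oriented structure.
    kernel = gabor_kernel(theta=np.pi / 4)
    print(kernel.shape, abs(kernel).max())
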
- Cognition and evolution. Neuropsychology of face recognition.
The sorts of tasks, primarily social, that shaped the evolution of
human brains. The computational load of social cognition as the
driving factor for the evolution of large brains. How the
degrees-of-freedom within faces and between faces are extracted and
encoded by specialised areas of the brain concerned with the
detection, recognition, and interpretation of faces and facial
expressions. Efforts to simulate these faculties in artificial systems.
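
One classical artificial approach to encoding the degrees of freedom
between faces is eigenface-style principal components analysis; a
minimal sketch, in which random vectors stand in for real face images:

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for a face dataset: 100 "images" flattened to 64-vectors
    # (a real pipeline would use aligned, cropped face photographs).
    faces = rng.normal(size=(100, 64))

    mean_face = faces.mean(axis=0)
    centred = faces - mean_face
    # Principal components of the ensemble ("eigenfaces") form a compact
    # basis for the degrees of freedom between faces.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    eigenfaces = vt[:8]                  # top 8 components

    # Any face is then encoded as a short vector of projections.
    code = centred[0] @ eigenfaces.T
    print(code.round(2))
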
- Artificial neural networks for pattern recognition.
A brief history of artificial neural networks and some successful
applications. Central concepts of learning from data, and foundations
in probability theory. Regression and classification problems viewed
as non-linear mappings. Analogy with polynomial curve fitting.
General "linear" models. The curse of dimensionality, and the
need for adaptive basis functions.
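
The curve-fitting analogy can be made concrete in a few lines; the
sketch below fits a degree-M polynomial, a model that is linear in its
coefficients, by least squares to synthetic noisy data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 20)
    t = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy targets

    # A degree-M polynomial is a "linear" model: linear in its weights w,
    # with fixed basis functions phi_j(x) = x**j.
    M = 3
    Phi = np.vander(x, M + 1, increasing=True)    # design matrix
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)   # least-squares fit
    print("coefficients:", w.round(3))

    # Raising M drives training error down but eventually overfits, and
    # with fixed bases the number of basis functions explodes with input
    # dimension: the curse of dimensionality motivating adaptive bases.
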
- Probabilistic inference.
Bayesian and frequentist views of probability and uncertainty.
Regression and classification expressed in terms of probability
distributions. Density estimation. Likelihood function and maximum
likelihood. Neural network output viewed as conditional mean.
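
A worked instance of maximum likelihood on the simplest density model, a
single Gaussian, whose ML estimates have a familiar closed form:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=3.0, scale=2.0, size=1000)   # synthetic sample

    # For a single Gaussian, maximising the log-likelihood
    #   log L(mu, var) = sum_n log N(x_n | mu, var)
    # gives the sample mean and the (biased) sample variance.
    mu_ml = data.mean()
    var_ml = ((data - mu_ml) ** 2).mean()
    print(f"ML estimates: mu = {mu_ml:.3f}, var = {var_ml:.3f}")

    # Under a Gaussian noise model, least-squares regression is maximum
    # likelihood, and a trained network's output approximates E[t | x],
    # the conditional mean of the target given the input.
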
- Network models for classification and decision theory.
Probabilistic formulation of classification problems. Prior and
posterior probabilities. Decision theory and minimum misclassification
rate. The distinction between inference and decision. Estimation of
posterior probabilities compared with the use of discriminant functions.
Neural networks as estimators of posterior probabilities.
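
A minimal numerical sketch of the probabilistic formulation: two classes
with assumed Gaussian class-conditional densities and priors, classified
by choosing the largest posterior probability (all numbers illustrative):

    import numpy as np

    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))

    # Two classes with Gaussian class-conditionals p(x | C_k) and prior
    # probabilities P(C_k); all numbers are illustrative.
    priors = np.array([0.7, 0.3])
    mus, sigmas = np.array([0.0, 2.0]), np.array([1.0, 1.0])

    def posterior(x):
        """Bayes' theorem: P(C_k | x) proportional to p(x | C_k) P(C_k)."""
        joint = gauss(x, mus, sigmas) * priors
        return joint / joint.sum()

    p = posterior(1.4)
    print(p.round(3), "decide class", p.argmax())

    # Picking the class with the largest posterior minimises the expected
    # misclassification rate; a trained network's outputs can be read as
    # estimates of these posteriors, separating inference from decision.
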
Objectives
At the end of the course students should
- be able to describe key aspects of brain function and neural processing
in terms of computation, architecture, and communication
- be able to analyse the viability of distinctions such as
computing versus communicating, signal versus noise, and
algorithm versus hardware, when these dichotomies from Computer
Science are applied to the brain
- understand the neurobiological mechanisms of vision well enough to think
of ways to implement them in machine vision
- understand basic principles of the design and function of artificial
neural networks that learn from examples and solve problems in classification
and pattern recognition
Reference books
Bishop, C.M. (1995). Neural Networks for Pattern Recognition.
Oxford University Press.
Haykin, S. (1994). Neural Networks: A Comprehensive Foundation.
Macmillan.
Aleksander, I. (1989). Neural Computing Architectures.
North Oxford Academic Press.
Taken by: Part II
Number of lectures: 16