Computer Science Syllabus - Artificial Intelligence II


Artificial Intelligence II

Lecturer: Dr S.B. Holden

No. of lectures and examples classes: 12 + 4

Prerequisite courses: Artificial Intelligence I, Logic and Proof, Continuous Mathematics (Mathematical Methods for Computer Science from 2006), Discrete Mathematics, Probability


The aim of this course is to build on Artificial Intelligence I, first by introducing more elaborate methods for knowledge representation and planning within the symbolic tradition, and then by moving beyond the purely symbolic view of AI to present methods developed for dealing with the critical concept of uncertainty. The central tool used to achieve the latter is probability theory. The course continues the primarily algorithmic, computer-science-centric perspective of Artificial Intelligence I.

The course aims to provide further tools and algorithms required to produce AI systems able to exhibit limited human-like abilities, with an emphasis on richer forms of knowledge representation, better planning algorithms, and systems able to deal with the uncertainty inherent in the environments in which most real agents must operate.


  • Further symbolic knowledge representation. Representing knowledge using First Order Logic (FOL). The situation calculus. [1 lecture]

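As an illustration of the situation calculus (the fluent and actions here are invented for the example, not taken from the course notes), a Reiter-style successor-state axiom for a blocks-world Holding fluent might be written:

```latex
% Successor-state axiom for an illustrative Holding fluent: after doing
% action a in situation s, Holding(x) is true iff a picked x up, or
% Holding(x) was already true and a did not put x down.
Poss(a, s) \rightarrow \bigl( Holding(x, do(a, s)) \leftrightarrow
    a = Pickup(x) \lor \bigl( Holding(x, s) \land a \neq Putdown(x) \bigr) \bigr)
```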
  • Further planning. Incorporating heuristics into partial-order planning. Planning graphs. The GRAPHPLAN algorithm. Planning using propositional logic. [2 lectures]

  • Uncertainty and Bayesian networks. Review of probability as applied to AI. Bayesian networks. Inference in Bayesian networks using both exact and approximate techniques. Other ways of dealing with uncertainty. [3 lectures]

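As a small illustration of exact inference in a Bayesian network, the following sketch computes a posterior by enumeration in a two-node network; the structure and probabilities are invented for the example, not taken from the course notes.

```python
# Exact inference by enumeration in a tiny Bayesian network
# Rain -> WetGrass; all numbers below are illustrative only.
P_rain = {True: 0.2, False: 0.8}                 # prior P(Rain)
P_wet_given_rain = {True: 0.9, False: 0.1}       # P(WetGrass=true | Rain)

def posterior_rain_given_wet():
    # P(Rain | WetGrass=true): enumerate the joint over Rain,
    # then normalise.
    joint = {r: P_rain[r] * P_wet_given_rain[r] for r in (True, False)}
    z = sum(joint.values())
    return {r: joint[r] / z for r in joint}

post = posterior_rain_given_wet()   # post[True] = 0.18 / 0.26, about 0.69
```

The same enumerate-and-normalise pattern extends to larger networks, though the exact algorithms covered in the course are needed to make it tractable.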
  • Utility and decision-making. Maximising expected utility, decision networks, the value of information. [1 lecture]

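The principle of maximising expected utility can be sketched with a toy decision; the actions, probabilities and utilities below are made up for the example.

```python
# Choosing the action that maximises expected utility.
# Chance node: Rain; decision: whether to take an umbrella.
P_rain = {True: 0.3, False: 0.7}
utility = {
    ("take_umbrella", True): 20,  ("take_umbrella", False): 10,
    ("leave_umbrella", True): -50, ("leave_umbrella", False): 30,
}

def expected_utility(action):
    # EU(a) = sum over outcomes of P(outcome) * U(a, outcome).
    return sum(P_rain[r] * utility[(action, r)] for r in P_rain)

best = max(("take_umbrella", "leave_umbrella"), key=expected_utility)
```

Here EU(take_umbrella) = 13 and EU(leave_umbrella) = 6, so the rational choice is to take the umbrella.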
  • Further supervised learning. Bayes' theorem as applied to supervised learning. The maximum likelihood and maximum a posteriori hypotheses. Applying the Bayesian approach to neural networks. [3 lectures]

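The contrast between the maximum likelihood and maximum a posteriori hypotheses can be illustrated with coin-flipping; the data and the Beta prior below are assumptions made for the example.

```python
# ML vs MAP estimation of a coin's heads-probability, assuming a
# Beta(a, b) prior over that probability.
def ml_estimate(heads, n):
    # Maximum likelihood: the observed frequency of heads.
    return heads / n

def map_estimate(heads, n, a, b):
    # MAP: mode of the Beta(heads + a, n - heads + b) posterior.
    return (heads + a - 1) / (n + a + b - 2)

# 7 heads in 10 flips: ML gives 0.7; a Beta(2, 2) prior pulls the
# MAP estimate towards 0.5, giving 8/12.
ml = ml_estimate(7, 10)
map_ = map_estimate(7, 10, 2, 2)
```

With more data the two estimates converge, since the likelihood dominates the prior.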
  • Uncertain reasoning over time. Markov processes, transition and sensor models. Inference in temporal models: filtering, prediction, smoothing and finding the most likely explanation. Hidden Markov models. [2 lectures]

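Filtering in a temporal model can be sketched with a single forward-algorithm step for a two-state hidden Markov model (the classic rain/umbrella setup; the transition and sensor probabilities below are illustrative).

```python
# One filtering step of the forward algorithm in a two-state HMM.
# Hidden state: Rain (True/False); evidence: umbrella observed or not.
T = {True:  {True: 0.7, False: 0.3},   # P(Rain_t | Rain_{t-1})
     False: {True: 0.3, False: 0.7}}
O = {True: 0.9, False: 0.2}            # P(umbrella=true | Rain)
prior = {True: 0.5, False: 0.5}

def filter_step(belief, umbrella):
    # Predict forward through the transition model, then weight by
    # the sensor model and renormalise.
    pred = {x: sum(T[xp][x] * belief[xp] for xp in belief) for x in belief}
    like = {x: (O[x] if umbrella else 1 - O[x]) for x in belief}
    unnorm = {x: like[x] * pred[x] for x in belief}
    z = sum(unnorm.values())
    return {x: unnorm[x] / z for x in unnorm}

belief = filter_step(prior, umbrella=True)   # P(Rain_1 | u_1) about 0.82
```

Repeating `filter_step` over a sequence of observations gives filtering; smoothing and most-likely-explanation queries add a backward pass, as covered in the lectures.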

At the end of this course students should

  • have gained a deeper appreciation for the way in which computer science has been applied to the problem of AI, and in particular for more recent techniques concerning knowledge representation, inference, planning and uncertainty

  • know how to model situations using a variety of knowledge representation techniques

  • be able to design problem solving methods based on knowledge representation, inference, planning, and learning techniques

  • know how probability theory can be applied in practice as a means of handling uncertainty in AI systems

Recommended reading

* Russell, S. & Norvig, P. (2003). Artificial intelligence: a modern approach. Prentice-Hall (2nd ed.).
Bishop, C.M. (1995). Neural networks for pattern recognition. Oxford University Press.

Christine Northeast
Sun Sep 11 15:46:50 BST 2005