Course material 2010–11

## Artificial Intelligence II

*Lecturer: Dr S.B. Holden*

*No. of lectures:* 16

*Prerequisite courses: Artificial Intelligence I, Logic and Proof, Algorithms I + II, Mathematical Methods for Computer Science, Discrete Mathematics I + II, Probability/Probability from the NST Mathematics course.*

**Aims**

The aim of this course is to build on Artificial Intelligence I, first by introducing more elaborate methods for planning within the symbolic tradition, but then by moving beyond the purely symbolic view of AI and presenting methods developed for dealing with the critical concept of uncertainty. The central tool used to achieve the latter is probability theory. The course continues to exploit the primarily algorithmic and computer science-centric perspective that informed Artificial Intelligence I.

The course aims to provide the further tools and algorithms required to produce AI systems able to exhibit limited human-like abilities, with an emphasis on better planning algorithms and on systems able to handle the uncertainty inherent in the environments within which most real agents must operate.

**Lectures**

**Further planning.** Incorporating heuristics into partial-order planning. Planning graphs. The GRAPHPLAN algorithm. Planning using propositional logic. Planning as a constraint satisfaction problem. [3 lectures]

**Uncertainty and Bayesian networks.** Review of probability as applied to AI. Representing uncertain knowledge using Bayesian networks. Inference in Bayesian networks using both exact and approximate techniques. Other ways of dealing with uncertainty. [3 lectures]

**Utility and decision-making.** The concept of utility. Utility and preferences. Deciding how to act by maximizing expected utility. Decision networks. The value of information, and reasoning about when to gather more. [2 lectures]

**Uncertain reasoning over time.** Markov processes, transition and sensor models. Inference in temporal models: filtering, prediction, smoothing and finding the most likely explanation. The Viterbi algorithm. Hidden Markov models. [2 lectures]

**Further supervised learning I.** Bayes' theorem as applied to supervised learning. The maximum likelihood and maximum a posteriori hypotheses. What does this teach us about the backpropagation algorithm? [1 lecture]

**How to classify optimally.** Bayesian decision theory and Bayes optimal classification. What does this tell us about how best to do supervised machine learning? [1 lecture]

**Further supervised learning II.** Applying the Bayes optimal classification approach to neural networks. [3 lectures]

**Reinforcement learning.** Learning from rewards and punishments. Markov decision processes. The problems of temporal credit assignment and exploration versus exploitation. Q-learning and its convergence. How to choose actions. [1 lecture]

**Objectives**

At the end of this course students should:

- Have gained a deeper appreciation of the way in which computer science has been applied to the problem of AI, and in particular for more recent techniques concerning knowledge representation, inference, planning and uncertainty.
- Know how to model situations using a variety of knowledge representation techniques.
- Be able to design problem solving methods based on knowledge representation, inference, planning, and learning techniques.
- Know how probability theory can be applied in practice as a means of handling uncertainty in AI systems.

**Recommended reading**

- Russell, S. & Norvig, P. (2010). *Artificial intelligence: a modern approach*. Prentice Hall (3rd ed.).
- Bishop, C.M. (2006). *Pattern recognition and machine learning*. Springer.
- Ghallab, M., Nau, D. & Traverso, P. (2004). *Automated planning: theory and practice*. Morgan Kaufmann.