Computer Laboratory

Demonstrations of mind-reading

Rana el Kaliouby & Peter Robinson

The following examples demonstrate how our system analyzes a video stream containing head and facial displays of a person to infer that person's mental state. The videos were recorded during a demonstration at the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) in Washington, DC, in 2004. We are very grateful to all those who took part in the video collection exercise.
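This page does not describe the inference pipeline itself. Purely as an illustration of how overlapping, asynchronous display evidence might be pooled into a mental-state estimate over time, here is a minimal sketch in Python. The display names, weights, window length, and the MentalStateEstimator class are illustrative assumptions, not the actual system.

    from collections import deque

    # Illustrative only: these display-to-mental-state weights are assumptions,
    # not the mapping used by the actual system.
    MENTAL_STATE_WEIGHTS = {
        "agreeing":    {"head_nod": 1.0, "smile": 0.5},
        "disagreeing": {"head_shake": 1.0, "lip_corner_depress": 0.3},
        "thinking":    {"head_tilt": 0.7, "gaze_aversion": 0.8},
        "unsure":      {"head_tilt": 0.5, "eyebrow_raise": 0.6},
    }

    class MentalStateEstimator:
        """Pools asynchronous per-frame display scores over a sliding window."""

        def __init__(self, window_frames=150):  # e.g. ~5 s at 30 fps (assumption)
            self.window = deque(maxlen=window_frames)

        def update(self, display_scores):
            """display_scores: dict mapping display name -> probability for one frame."""
            self.window.append(display_scores)
            return self._estimate()

        def _estimate(self):
            # Average each display's probability over the window, then take a
            # weighted sum per mental state and report the strongest state.
            n = len(self.window)
            avg = {}
            for frame in self.window:
                for name, p in frame.items():
                    avg[name] = avg.get(name, 0.0) + p / n
            scores = {
                state: sum(w * avg.get(d, 0.0) for d, w in weights.items())
                for state, weights in MENTAL_STATE_WEIGHTS.items()
            }
            best = max(scores, key=scores.get)
            return best, scores[best]

    if __name__ == "__main__":
        est = MentalStateEstimator()
        # Simulated frames in which a head nod and a smile overlap asynchronously.
        for t in range(60):
            frame = {"head_nod": 0.9 if t % 10 < 5 else 0.1,
                     "smile": 0.8 if 20 <= t < 50 else 0.2}
            state, score = est.update(frame)
        print(state, round(score, 3))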

The videos were collected as follows:

  • 16 participants from the CVPR audience were asked to act out several mental states.
  • They were given no instructions or hints on how to act out a particular mental state.
  • They were asked to maintain a frontal pose as much as possible.
  • They were allowed to talk as well as make facial expressions.
  • The background of the setup was another demonstration booth, so people were continually moving in and out of the scene.
  • We relied on the existing lighting in the conference room.

The videos are in Real (.rm) format. You may need to install a player such as RealPlayer or RealAlternative to view them.

  • Agreeing: Note the overlapping asynchronous head and facial displays.
  • Disagreeing: Note the oblique angle and that the person is talking.
  • Thinking: Note the spectacles and beard.
  • Unsure: Note the activity in the background.
  • Tracking failure: The tracker fails with rapid shaking of the head.