Computer Laboratory

Emotionally intelligent interfaces

Andra Adams, Tadas Baltrušaitis, Ntombi Banda, Marwa Mahmoud, Quentin Stafford-Fraser, Erroll Wood, Heng Yang & Peter Robinson

[Figure: photographs of faces showing expressions of emotions (Darwin, 1872)]

With rapid advances in key computing technologies and heightened user expectations of computers, the development of socially and emotionally adept technologies is becoming a necessity. This project investigates the inference of people's mental states from facial expressions, vocal nuances, body posture and gesture, and other physiological signals, and also considers the expression of emotions by robots and cartoon avatars.

Facial expressions provide an important spontaneous channel for the communication of both emotional and social displays. They are used to communicate feelings, show empathy, and acknowledge the actions of other people.

In this research we investigate how facial expression information can be used as part of a wider context to make useful inferences about a user's mental state in a natural computing environment, in a way that increases usability. We draw inspiration from emotion theories on the role of facial expressions in inferring mental states, most notably the part played by temporal and situational context in the process.
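The role of temporal context can be illustrated with a simple sketch. The code below is purely hypothetical and not the project's actual inference system: it averages per-frame expression scores over a short sliding window, so that a momentary ambiguous frame does not flip the inferred mental state. The state labels, scores, and window length are all illustrative assumptions.

```python
from collections import deque

WINDOW = 5  # number of recent frames considered (illustrative choice)

def infer_mental_state(frame_scores, window=WINDOW):
    """Average per-frame expression scores over a sliding window and
    return the label with the highest smoothed score."""
    history = deque(maxlen=window)  # keeps only the most recent frames
    for scores in frame_scores:
        history.append(scores)
    labels = history[0].keys()
    smoothed = {label: sum(f[label] for f in history) / len(history)
                for label in labels}
    return max(smoothed, key=smoothed.get)

# One briefly ambiguous frame amid mostly 'concentrating' frames does
# not change the inferred state, because the window averages it out.
frames = [
    {"concentrating": 0.7, "confused": 0.3},
    {"concentrating": 0.6, "confused": 0.4},
    {"concentrating": 0.2, "confused": 0.8},  # momentary frown
    {"concentrating": 0.7, "confused": 0.3},
    {"concentrating": 0.8, "confused": 0.2},
]
print(infer_mental_state(frames))  # → concentrating
```

A per-frame classifier would label the third frame "confused"; the temporal window recovers the more plausible sustained state.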


Testing our inference system has shown the computer to be as accurate as the top 6% of people. But would we want computers that can react to our emotions? Such systems do raise ethical issues: imagine a computer that could pick the right emotional moment to try to sell you something. There are, however, applications with clear benefits, including an emotional hearing aid to assist people with autism, usability testing for software, feedback for on-line teaching, and informing the animation of cartoon figures.

We have been working since 2004 on a wearable system with emotional-social understanding and mind-reading functions to help people with Autism Spectrum Conditions and Asperger Syndrome. Rana el Kaliouby, who was awarded a PhD for her work on the project, is currently implementing the first prototype of the system at the Massachusetts Institute of Technology's Media Lab.

Metin Sezgin joined the team in Cambridge to look at ways of improving the inference of mental states by combining multiple sources of information, including biometric sensors. Tal Sobol-Shikler investigated the effects of emotions on non-verbal cues in speech for her PhD at Cambridge, and is now pursuing the work at Ben-Gurion University. Daniel Bernhardt, another research student, extended the system to recognise further channels of affective communication such as posture and gesture. Shazia Afzal investigated applications of affective inference to support on-line teaching systems, and Laurel Riek looked at the expression of emotions by humanoid robots.

Another important area is discerning drivers' mental states. If a driver gets lost while trying to find a route through an unfamiliar city in heavy traffic, the burden of understanding advice from a navigational system could actually be more of a hindrance than a help. We have been working with a major motor manufacturer on systems to detect when a driver is confused, distracted, drowsy or even upset, and adapt the car's telematic systems accordingly.

Two post-docs and three research students are currently working on affective computing: Tadas Baltrušaitis is considering affect in remote communications, Andra Adams is working on affective robotics for autism spectrum conditions, Ntombi Banda is working on fusion techniques for affective inference, Marwa Mahmoud is working on multi-modal inference of occluded gestures, and Vaiva Imbrasaitė is looking at the effects of music on emotions.
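One common family of fusion techniques for combining modalities such as face, voice, and posture is late fusion, where each modality first produces its own probability distribution over mental states and the distributions are then merged. The sketch below is an illustrative example of weighted late fusion only, not the fusion method used in this project; the modalities, state labels, and weights are assumptions.

```python
def fuse(modality_probs, weights):
    """Combine per-modality probability distributions over mental
    states by a weighted average (late fusion)."""
    labels = next(iter(modality_probs.values())).keys()
    total = sum(weights.values())  # normalise so the result sums to 1
    return {
        label: sum(weights[m] * probs[label]
                   for m, probs in modality_probs.items()) / total
        for label in labels
    }

# Each channel gives its own (hypothetical) view of the user's state.
modality_probs = {
    "face":    {"interested": 0.6, "bored": 0.4},
    "voice":   {"interested": 0.5, "bored": 0.5},
    "posture": {"interested": 0.3, "bored": 0.7},
}
# Weights reflect how much each channel is trusted (illustrative).
weights = {"face": 0.5, "voice": 0.3, "posture": 0.2}

fused = fuse(modality_probs, weights)
print(max(fused, key=fused.get))  # → interested
```

Here the heavily weighted facial channel outweighs the contrary posture evidence, giving a fused verdict of "interested" (0.51 vs 0.49).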

Further information

Please check the frequently asked questions or contact Peter Robinson for further information.
