Tadas Baltrušaitis, a research student in the Graphics & Interaction Group, was on the winning team for the 2011 International Audio-Visual Emotion Challenge.
Tadas presented a paper on “Modeling Latent Discriminative Dynamic of Multi-Dimensional Affective Signals” at the fourth International Conference on Affective Computing and Intelligent Interaction in Memphis this week, reporting on a system that won the video sub-challenge in the competition.
The Audio-Visual Emotion Challenge (AVEC 2011) was the first competition aimed at comparing automatic audio, visual and audiovisual emotion analysis. Its goal was to provide a common benchmark test set for multimodal information processing and to bring together the audio and video emotion recognition communities: to compare the relative merits of the two approaches to emotion recognition under well-defined and strictly comparable conditions, and to establish to what extent fusing the approaches is possible and beneficial. A further motivation was the need to advance emotion recognition systems so that they can deal with naturalistic behaviour in large volumes of un-segmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia retrieval and human-machine/human-robot communication interfaces have to face in the real world.
See http://www.cl.cam.ac.uk/research/rainbow/emotions/ for more information about research into affective computing in the Computer Laboratory.