
Facial Expression Synthesis

It is not difficult to imagine a future in which we interact with virtual agents: help desks, museum guides, online shopping, and distance learning are just some of the situations where we could encounter them. We want such agents to communicate with us as we communicate with other human beings. For example, your experience might be improved if a virtual character looked embarrassed when it did not know the answer to your question, or if your virtual tutor looked encouraging when you were struggling with a problem.

We would like such virtual characters to behave realistically. This involves understanding the signals we use in non-verbal communication and enabling virtual characters to display those signals. I concentrate on a subset of non-verbal cues: facial expressions, head gestures and gaze.

Such understanding can be achieved by analysing data collected from humans interacting with each other or with computers. This data can then be used to teach virtual characters to express themselves in an appropriate manner.
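To make this concrete, the sketch below shows one way such data-driven synthesis might work: displacements of tracked facial feature points from a neutral face are projected onto expression directions learned from recorded human data, giving parameter weights that can drive an animated face. This is a minimal illustration under stated assumptions, not the actual pipeline; the function name, the 68-point landmark scheme, and the random stand-in data are all hypothetical.

    import numpy as np

    def expression_parameters(neutral, current, basis):
        """Project feature-point displacements onto an expression basis.

        neutral, current : (N, 2) landmark arrays from a face tracker
        basis            : (K, N*2) matrix; each row is one expression
                           direction (e.g. brow raise, smile) learned
                           from recorded human data
        Returns a K-vector of parameter weights for the animated face.
        """
        displacement = (current - neutral).ravel()
        return basis @ displacement

    # Toy usage with random data standing in for real tracker output.
    rng = np.random.default_rng(0)
    neutral = rng.normal(size=(68, 2))        # hypothetical 68-point landmark scheme
    current = neutral + rng.normal(scale=0.05, size=(68, 2))
    basis = rng.normal(size=(4, 68 * 2))      # 4 learned expression directions
    weights = expression_parameters(neutral, current, basis)
    print(weights)  # these weights could drive blendshapes or servo targets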

An example of data-driven synthesis: a robotic head, a virtual character, and a stick-figure drawing animated using human facial expressions.

Example videos of facial expression and head gesture transfer from humans to various platforms can be found here and here.
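The sketch below illustrates, under the same caveats, how one set of expression parameters could be retargeted to several platforms at once, as in the robot head, virtual character, and stick-figure example above. The class and method names are hypothetical, not the actual system's API.

    from abc import ABC, abstractmethod

    class ExpressionTarget(ABC):
        @abstractmethod
        def apply(self, weights):
            """Render one frame from a vector of expression weights."""

    class VirtualCharacter(ExpressionTarget):
        def apply(self, weights):
            print(f"blendshape weights -> {weights}")

    class RoboticHead(ExpressionTarget):
        def apply(self, weights):
            # A real robot would also clamp and rate-limit servo commands.
            servo_angles = [max(-1.0, min(1.0, w)) for w in weights]
            print(f"servo targets -> {servo_angles}")

    class StickFigure(ExpressionTarget):
        def apply(self, weights):
            # A line drawing might use only a coarse subset of parameters.
            print(f"mouth curve -> {weights[0]:.2f}, brow height -> {weights[1]:.2f}")

    # The same tracked expression drives every platform each frame.
    frame_weights = [0.6, -0.2, 0.1, 0.0]
    for target in (VirtualCharacter(), RoboticHead(), StickFigure()):
        target.apply(frame_weights)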

Relevant publications

  • A Facial Affect Mapping Engine
    Leonardo Impett, Tadas Baltrušaitis, and Peter Robinson
in International Conference on Intelligent User Interfaces (IUI), Haifa, Israel, February 2014.
[pdf] [video] [code soon]
  • A Facial Affect Mapping Engine (FAME)
    Leonardo Impett, Tadas Baltrušaitis, and Peter Robinson
in International Workshop on Intelligent Digital Games for Empowerment and Inclusion (IDGEI), held at IUI, Haifa, Israel, February 2014.
    [pdf]
  • The emotional computer
    Peter Robinson, Tadas Baltrušaitis, Ian Davies, Tomas Pfister, Laurel Riek, and Kevin Hull
    in IEEE International Conference on Pervasive Computing, San Francisco, CA, June 2011.
    [Best video award]
  • Synthesizing Expressions using Facial Feature Point Tracking: How Emotion is Conveyed
    Tadas Baltrušaitis, Laurel D. Riek, and Peter Robinson
in Proceedings of the ACM Workshop on Affective Interaction in Natural Environments (AFFINE '10), at ACM Multimedia (ACM-MM '10).
    [pdf]
  • Facial Expression Synthesis
    Tadas Baltrušaitis, Laurel D. Riek, and Peter Robinson
Poster at the Microsoft Research Summer School, July 2010.
    [pdf]