My research concerns problems at the boundary between people and computers. This involves investigating new technologies to enhance communication between computers and their users, and new applications to exploit these technologies.
For some years, I have been pioneering video input and output as part of the user interface. The idea is to develop augmented environments in which everyday objects acquire computational properties, rather than virtual environments where the user is obliged to inhabit a synthetic world. Xerox sponsored three of my research students, who laid the groundwork for a new model of interaction based on video user interfaces. Together we built a user interface based on video projection and digital cameras, extended this for remote collaboration, and investigated the use of a camera for input alone.
- Pierre Wellner: Interacting with paper on the DigitalDesk, Communications of the ACM 36(7), July 1993, pp 87-96.
- Pierre Wellner & Steve Freeman: The DoubleDigitalDesk: Shared editing of paper documents, Rank Xerox EuroPARC Technical Report EPC-93-108, 1993.
- Quentin Stafford-Fraser & Peter Robinson: BrightBoard - a video-augmented environment, Proceedings ACM Conference on Human Factors in Computing Systems (CHI), Vancouver BC, April 1996, pp 134-141.
The research continued with support from the EPSRC to investigate combinations of electronic and conventional publishing, with applications in education.
- Peter Robinson, Dan Sheppard, Richard Watts, Robert Harding & Steve Lay: A framework for interacting with paper, Computer Graphics Forum 16(3), September 1997, pp 329-334.
- Heather Brown, Peter Robinson, Dan Sheppard, Richard Watts, Robert Harding & Steve Lay: Active Alice - using real paper to interact with electronic text, Proceedings 7th International Conference on Electronic Publishing, Saint Malo, April 1998, pp 407-419.
- Peter Robinson: Digital manuscripts and electronic publishing, Editio 13, Autumn 1999, pp 337-346.
Thales Research & Technology have funded further work to consider very large projected displays and support for collaboration. This involves two-handed input using new tools to replace the keyboard and mouse, and also more general questions of visual interaction beyond the conventional desktop metaphor. We are continuing the work with a broader investigation of shared media spaces.
- Mark Ashdown & Peter Robinson: Escritoire: a personal projected display, IEEE MultiMedia 12(1), January 2005, pp 34-42.
- Mark Ashdown & Peter Robinson: Remote collaboration on desk-sized displays, Computer Animation and Virtual Worlds 16(1), February 2005, pp 41-51.
- Phil Tuddenham & Peter Robinson: Territorial coordination and workspace awareness in remote tabletop collaboration, ACM Conference on Human Factors in Computing Systems, Boston, MA, April 2009.
- The T3 tabletop toolkit is freely available for research use.
A further use of cameras is to observe users and infer their mental states. Another student has made considerable progress in recognising, from video images of a subject's face, complex emotions that develop over several seconds. This has several commercial applications. The work used video clips of professional actors for training and initial evaluation; further trials were then conducted with emotions acted by delegates at a conference.
- Rana El Kaliouby & Peter Robinson: Real-time inference of complex mental states from facial expressions and head gestures, In Real-time vision for HCI, Springer 2005, pp 181-200.
- Rana El Kaliouby & Peter Robinson: Generalization of a Vision-Based Computational Model of Mind-Reading, International Conference on Affective Computing and Intelligent Interaction, Beijing, October 2005.
- Rana El Kaliouby & Peter Robinson: The emotional hearing aid - an assistive tool for autism, HCI International, Crete, June 2003.
- Mind-reading machines exhibit at the Royal Society's 2006 Summer Science Exhibition.
The research broadened to consider naturally evoked emotions, to draw information from other channels such as sound and posture, and to consider applications.
- Tal Sobol-Shikler & Peter Robinson: Classification of complex information: inference of co-occurring affective states from their expressions in speech, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.
- Daniel Bernhardt & Peter Robinson: Detecting emotions from connected action sequences, International Visual Informatics Conference, Kuala Lumpur, November 2009.
- Shazia Afzal & Peter Robinson: Natural affect data - collection & annotation in a learning context, International Conference on Affective Computing and Intelligent Interaction, Amsterdam, September 2009.
- Laurel Riek & Peter Robinson: Real-time empathy: facial mimicry on a robot, AFFINE Workshop, Crete, October 2008.
I have also pursued a parallel line of research into inclusive user interfaces. A collaboration with the Engineering Design Centre has addressed questions of physical impairment, and research students have considered visual impairments. This has broader applications for interaction with ubiquitous computers, where the input and output devices themselves impose limitations.
- Simeon Keates, John Clarkson & Peter Robinson: Investigating the applicability of user models for motion-impaired users, Proceedings ACM Conference on Assistive Technologies, Washington DC, November 2000, pp 129-136.
- Silas Brown & Peter Robinson: Transformation frameworks and their relevance in universal design, Universal Access in the Information Society 3(3-4), October 2004, pp 209-223.
- Pradipta Biswas & Peter Robinson: Automatic evaluation of assistive interfaces, International Conference on Intelligent User Interfaces, Canary Islands, January 2008, pp 247-256.
Finally, I work with colleagues at IBM on topics at the convergence of computing and communications to provide ubiquitous computing, and with colleagues at MIT on applications to support education.
- John Fawcett & Peter Robinson: Adaptive routing for road traffic, IEEE Computer Graphics and Applications 20(3), May/June 2000, pp 46-53.
- Maja Vukovic & Peter Robinson: Adaptive, planning-based, Web service composition for context awareness, International Conference on Pervasive Computing, Vienna, April 2004.
- William Billingsley, Peter Robinson, Mark Ashdown & Chris Hanson: Intelligent tutoring and supervised problem solving in the browser, IADIS WWW/Internet, Madrid, Spain, October 2004.
Research grants
| Dates | Grant |
|---|---|
| 1984-1989 | Xerox University Grant at Cambridge (~£500k). |
| 1989-1992 | HOL verification of Ella designs (SERC £137k). |
| 1991-1996 | Video user interfaces (three Xerox research studentships). |
| 1994-1996 | World class software (Teaching Company Scheme £267k). |
| 1994-1997 | Self-timed logic (EPSRC £135k). |
| 1994-1997 | Managing mobile connections (IBM research studentship). |
| 1995-1998 | Animated paper documents (EPSRC £268k). |
| 1998-2002 | Self-timed microcontrollers (EPSRC £536k). |
| 1998-2002 | New paradigms for visual interaction (EPSRC £230k). |
| 1999-2002 | Computer assistance for motion-impaired users (EPSRC £256k). |
| 1999-2002 | Personal projected displays (Thales research studentship). |
| 2000-2003 | Domestic user interfaces (AT&T CASE studentship). |
| 2001-2003 | VLSI structures for globally asynchronous systems (EPSRC £185k). |
| 2003-2006 | The intelligent book (CMI £200k + $416k at MIT). |
| 2003-2005 | Context-aware computing (IBM research studentship). |
| 2004 | Data conversion for accessibility (IBM $40k). |
| 2004-2006 | Sudden impact (CMI £240k + $565k at MIT). |
| 2004-2007 | Shared media spaces (Thales CASE studentship). |
| 2005-2008 | Affective inference for driver monitoring (Toyota Motor Corporation £190k). |
| 2005-2009 | Empathic avatars (EPSRC £265k + £295k at UCL). |
| 2006-2008 | Presenccia (EU €241k + 13 other partners). |
| 2006-2007 | Transforming Perspectives: technology to support the teaching and learning of threshold concepts (ESRC £47k). |
| 2008 | EECS Curriculum Workshop (CMI £18k + $28k at MIT). |
| 2008-2011 | Affective computing in control environments (Thales CASE studentship). |
| 2009-2012 | Affective interaction (Thales CASE studentship). |
| 2011-2014 | ASC Inclusion (EU €1.9m). |
| 2012-2014 | Multimodal and cognition-aware systems (EU €200k). |
| 2013-2015 | Deterrence of deception in socio-technical systems (EPSRC £966k). |
| 2013-2016 | Personalised health monitoring system (EU €295k). |
| 2015-2017 | Vision-based research for the automotive domain (JLR £571k). |
Research collaborators and visitors
| Dates | Names |
|---|---|
| 1995-1998 | Robert Harding, Steve Lay, Dan Sheppard & Richard Watts |
| 1997 | Professor Heather Brown (University of Kent) |
| 1998-2002 | Alan Blackwell & Rachel Hewson |
| 1998-2003 | Steev Wilcox, George Taylor & Bob Mullins |
| 1999-2002 | John Clarkson & Simeon Keates |
| 2000 | Professor Frederick Brooks (University of North Carolina) |
| 2002-2005 | Douglas Dykeman & Stefan Hild (IBM Zurich) |
| 2003-2005 | Professors Hal Abelson & Gerry Sussman (MIT) |
| 2003-2005 | Mark Ashdown & Kazim Rehman |
| 2004-2006 | Professor Eric Grimson (MIT) |
| 2007 | Professor Yoichi Sato (University of Tokyo) & Professor Imari Sato (Japanese National Institute of Informatics) |
| 2007-2008 | Professor Frederick Brooks (University of North Carolina) |
| 2007-2008 & 2011-2013 | Ian Davies |
| 2013 | Professor Rafael Calvo (University of Sydney) |