Computer Laboratory

Project suggestions from the Graphics & Interaction Group

Computational Analysis of Personality

Originator: Hatice Gunes

Individuals' interactions with others are shaped by their own personalities and by their impressions of others' behaviours and personalities. Automatic personality analysis from nonverbal behaviour can therefore improve people's interaction experience with socially intelligent systems, including humanoid robots. There is already a significant body of work on personality analysis from video and audio; this project aims to explore other data modalities, such as bio-signals, and novel multimodal fusion methods in a dyadic interaction scenario.
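
As an illustration of multimodal fusion, here is a minimal late-fusion sketch in Python; the features, data, and model choices are placeholders rather than part of the project:

    # A minimal late-fusion sketch for trait prediction from two modalities.
    # Features, data, and hyper-parameters are all illustrative placeholders.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Hypothetical per-participant features: video-based and bio-signal-based.
    X_video = rng.normal(size=(100, 32))   # e.g. pooled facial-expression stats
    X_bio = rng.normal(size=(100, 8))      # e.g. heart-rate / skin-conductance stats
    y = rng.normal(size=100)               # e.g. an annotated personality trait score

    # Late fusion: train one regressor per modality, then average predictions.
    video_model = Ridge().fit(X_video, y)
    bio_model = Ridge().fit(X_bio, y)
    fused = 0.5 * video_model.predict(X_video) + 0.5 * bio_model.predict(X_bio)

    # Early fusion, for comparison: concatenate features before training.
    early_model = Ridge().fit(np.hstack([X_video, X_bio]), y)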

Requirements: This project would benefit from studying machine learning.

Transference

Originator: Hatice Gunes

Joint rhythms have been found to foster a sense of belonging, kindness to strangers, and mutual liking. Synchronised activity such as singing, dancing, and even laughing is now understood as a social glue that benefits communities as well as individual emotions and well-being. This project focuses on computational data mining of a recorded public event called Transference, in which around 100 people wearing robot-lets were recorded using multiple sensors. Computational analysis will focus on changes in rhythm and on well-being indicators (relaxation, calmness, boredom), for which the recordings have been annotated by experts.
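
One plausible starting point for the rhythm analysis is windowed cross-correlation between participants' movement-energy signals. The sketch below is illustrative only, with synthetic data and arbitrary window sizes:

    # A sketch of one possible synchrony measure: windowed cross-correlation
    # between two participants' movement-energy signals.
    import numpy as np

    def movement_energy(frames):
        """Sum of absolute frame differences: a crude per-frame motion signal."""
        return np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))

    def windowed_sync(a, b, win=120, step=30):
        """Peak normalised cross-correlation per window (1.0 = in lock-step)."""
        scores = []
        for start in range(0, len(a) - win, step):
            x = a[start:start + win] - a[start:start + win].mean()
            y = b[start:start + win] - b[start:start + win].mean()
            denom = np.sqrt((x * x).sum() * (y * y).sum())
            scores.append(np.correlate(x, y, mode="full").max() / max(denom, 1e-12))
        return np.asarray(scores)

    rng = np.random.default_rng(0)
    frames_a = rng.random((600, 48, 64))   # stand-ins for two video streams
    frames_b = rng.random((600, 48, 64))
    sync = windowed_sync(movement_energy(frames_a), movement_energy(frames_b))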

Requirements: This project would benefit from previous experience with computer vision.

Live coded video processing architecture

Originator: Alan Blackwell with Sam Aaron

Sam Aaron's Sonic Pi language for live-coded music is a local product that has achieved huge popularity and media coverage. Funded by the Raspberry Pi Foundation, it is also available for Mac and PC platforms.

The goal of this project is to create an architecture for interacting with multiple video streams, modifiable in real time through live-coding styles of software development. The starting point will be modelled on the SuperCollider architecture for real-time processing of audio and event streams, but extended to support image and video frame data. It may not be necessary to use the SuperCollider source code, although this will provide a source of guidance. The implementation language is likely to be C++.
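
For illustration, here is a Python sketch of the intended node-graph idea; the real implementation would likely be C++, as noted above, and the class names and filters here are invented:

    # An illustrative sketch of the architecture: a chain of filter nodes
    # applied to each frame, where the chain can be swapped out at runtime
    # (the live-coding hook).
    import numpy as np

    class Invert:
        def process(self, frame):
            return 255 - frame

    class Threshold:
        def __init__(self, level=128):
            self.level = level
        def process(self, frame):
            return np.where(frame > self.level, 255, 0).astype(frame.dtype)

    class Pipeline:
        """Holds the current node chain; live code replaces it between frames."""
        def __init__(self):
            self.nodes = []
        def swap(self, nodes):          # called from the live-coding editor
            self.nodes = nodes
        def run(self, frame):
            for node in self.nodes:
                frame = node.process(frame)
            return frame

    pipe = Pipeline()
    pipe.swap([Invert(), Threshold(100)])
    out = pipe.run(np.zeros((480, 640), dtype=np.uint8))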

Evaluation can be done in the Lab, based on video throughput, time resolution, latency and jitter from filter operations. As an extension, it would also be possible to package "Video Pi" (or choose your own name if you like) into the Sonic Pi editing environment so that kids can create their own video mixers or filters connected to an external camera. There will also be opportunities for future funded research in this area.
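
A throughput, latency and jitter measurement could look like the following sketch (the filter and frame size are stand-ins):

    # Time a filter repeatedly; report throughput, mean latency, and jitter.
    import time
    import numpy as np

    def measure(process, frame, n=300):
        lat = []
        for _ in range(n):
            t0 = time.perf_counter()
            process(frame)
            lat.append(time.perf_counter() - t0)
        lat = np.asarray(lat)
        return {"fps": 1.0 / lat.mean(),
                "mean_ms": 1e3 * lat.mean(),
                "jitter_ms": 1e3 * lat.std()}

    frame = np.zeros((480, 640), dtype=np.uint8)
    print(measure(lambda f: 255 - f, frame))   # trivial invert filter as a stand-in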

Novel interactive data visualisations

Originators: Alan Blackwell with Advait Sarkar and Ben Azvine (BT research)

Large networks of the kind operated by BT are a source of huge quantities of time-varying data with many variables. A wealth of information can be extracted from such data, but initial exploration of the dataset can be a formidable task, particularly when the features of the dataset are unknown. There are a few standard means of data visualisation, including trend graphs, bubble diagrams, network diagrams, pie charts, geographical maps, sun ray diagrams, and radial views. However, these represent a relatively limited range of statistical methods. The goal of this project is to build on the capabilities of statistical analysis packages such as R and NumPy to create new visualisations as an intuitive alternative to existing statistics packages and open-source visualisation tools.
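
As a trivial starting point, exploratory small multiples over synthetic multivariate time series might look like the sketch below; the libraries and data are illustrative choices only:

    # Small multiples with a shared time axis, on synthetic data standing in
    # for network telemetry.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    t = np.arange(1000)
    signals = rng.normal(size=(5, 1000)).cumsum(axis=1)   # five random walks

    fig, axes = plt.subplots(5, 1, sharex=True, figsize=(8, 6))
    for i, (ax, sig) in enumerate(zip(axes, signals)):
        ax.plot(t, sig, lw=0.8)
        ax.set_ylabel(f"var {i}")
    axes[-1].set_xlabel("time")
    plt.tight_layout()
    plt.show()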

Musical interfaces to programming languages

Originators: Alan Blackwell and Sam Aaron

Live Coding is an exciting music genre featured not only in experimental art music, but also in jazz improvisation and Algoraves (featured in Wired Magazine in August). Cambridge is one of the international centres of Live Coding research and performance. The goal of this project is to integrate some newly acquired musical equipment (including a MIDI drumkit) into the programming environments used in improvised programming.
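
One way to get drum events into a programming environment is through a MIDI library. The sketch below assumes the mido library (one possible choice, requiring a backend such as python-rtmidi); each note-on event could be forwarded to the live-coding environment as a trigger:

    # Read drum hits from the default MIDI input port.
    import mido

    with mido.open_input() as port:          # default MIDI input port
        for msg in port:                     # blocks, yielding messages
            if msg.type == "note_on" and msg.velocity > 0:
                # msg.note identifies the drum pad; velocity gives hit strength
                print(f"pad {msg.note} hit, velocity {msg.velocity}")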

Control room attention models

Originator: Peter Robinson

Many complex industrial systems are run from central control rooms where operators monitor information on multiple screens to identify anomalous conditions. Current design tools for control rooms are limited to 3D models of the hardware, which can be used to assess the physical ergonomics but do not help in understanding the work of the human operators.

This project focuses on developing computational models for predicting the operators' attention, so that the human-machine interface can be evaluated and configured properly during control-room design. These models are expected to improve the arrangement of information shown through the HMI and to reduce the operators' risk of missing important information in critical situations. This will involve predicting visual search patterns over an extended display.
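
A minimal ingredient of such a model is a bottom-up saliency map. The sketch below uses a simple centre-surround (difference-of-Gaussians) contrast measure; all constants are illustrative, and a full attention model would add top-down task knowledge and a scan-path predictor:

    # Bottom-up saliency as centre-surround contrast over screen luminance.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency(luminance, centre_sigma=2, surround_sigma=16):
        centre = gaussian_filter(luminance, centre_sigma)
        surround = gaussian_filter(luminance, surround_sigma)
        s = np.abs(centre - surround)
        return s / max(s.max(), 1e-12)

    screen = np.zeros((270, 480))
    screen[100:120, 200:240] = 1.0           # e.g. a bright alarm widget
    sal = saliency(screen)                   # peaks where contrast is highest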

Visual services

Originator: Peter Robinson

Tangible user interfaces could be built using a standard mobile phone and 2D bar-codes [Controlled availability of pervasive Web services, IEEE ICDCS 2003]. Design, build and evaluate such a system.
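
The code-reading half of such a system could be prototyped with OpenCV's built-in QR detector (one possible choice of 2D code and library); the decoded payload would then name the service to invoke:

    # Grab a frame from the camera and decode any QR code it contains.
    import cv2

    cap = cv2.VideoCapture(0)                # phone/webcam video stream
    detector = cv2.QRCodeDetector()
    ok, frame = cap.read()
    if ok:
        payload, points, _ = detector.detectAndDecode(frame)
        if payload:
            print("service URL:", payload)   # e.g. dispatch to a service here
    cap.release()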

Computational photography

Originator: Rafal Mantiuk

Modern cameras, for example those found in smartphones, can capture high-quality images despite the very small form factor of the sensor and the lens. Such good results can be achieved with the help of computational algorithms that reconstruct the best possible image from noisy or incomplete data, often with the help of machine learning methods.

The project in this area will involve developing a computational photography method that reconstructs visual information (images, video, depth, light field, proxy geometry, …) from camera images or video frames. The methods typically involve numerical optimization and/or deep learning techniques.
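
As a toy example of this optimisation view, the sketch below denoises an image by gradient descent on a quadratic data term plus a smoothness prior, a stand-in for the richer priors used in practice:

    # Reconstruction as optimisation: minimise 0.5*||x - y||^2 plus a
    # smoothness penalty on the gradient of x, by gradient descent.
    import numpy as np

    def denoise(noisy, lam=0.2, step=0.2, iters=200):
        x = noisy.copy()
        for _ in range(iters):
            data_grad = x - noisy            # pull towards the observation
            lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                   np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
            x -= step * (data_grad - lam * lap)
        return x

    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.3 * np.random.default_rng(0).normal(size=clean.shape)
    restored = denoise(noisy)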

Rendering for future display devices

Originator: Rafal Mantiuk

One of the big changes of recent years is the emergence of new display devices and technologies, such as high-refresh-rate monitors, high-dynamic-range displays, VR/AR headsets and varifocal displays. Most computer graphics rendering techniques were developed for flat-panel displays and do not exploit the features and limitations of these new technologies. The result is sub-optimal use of computational resources, or a failure to take full advantage of the new display capabilities.

The project in this area will involve developing a computer graphics rendering method that would target a particular future display technology. The method should exploit a new capability or a limitation of such a display technology. Some knowledge of computer graphics techniques is required for this project.

Perception-limited rendering

Originator: Rafal Mantiuk

Computer graphics techniques often do not need to be physically accurate. The human eye can tolerate a certain amount of error, which in turn can be exploited to free up computational resources.

The project in this area will involve developing a rendering method that would rely on the limitations of the visual system to adaptively direct computational resources where they are needed the most. The techniques could include foveated rendering, adaptive refresh rate/resolution or temporal antialiasing. Some knowledge of computer graphics techniques is required for this project.
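
To make the idea concrete, the sketch below computes a per-pixel shading rate from eccentricity using a linear minimum-angle-of-resolution (MAR) model; the constants are illustrative, not calibrated:

    # Allowed detail falls off with distance from the gaze point.
    import numpy as np

    def shading_rate(width, height, gaze, deg_per_px=0.02,
                     mar0=1/60, slope=0.0035):
        """Relative detail needed per pixel (1.0 at the gaze point)."""
        ys, xs = np.mgrid[0:height, 0:width]
        ecc_deg = deg_per_px * np.hypot(xs - gaze[0], ys - gaze[1])
        mar = mar0 + slope * ecc_deg         # resolvable detail shrinks linearly
        return mar0 / mar                    # fraction of full resolution needed

    rate = shading_rate(1920, 1080, gaze=(960, 540))
    # e.g. shade at quarter resolution wherever rate < 0.25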

Propose your own project

The Graphics & Interaction Group has a range of interesting hardware. Consider the useful research that could be done if you had access to this and propose something novel and interesting.