Project suggestions from the Graphics & Interaction Group
Live coded video processing architecture
Sam Aaron's Sonic Pi language for live-coded music was developed locally and has achieved huge popularity and media coverage. Funded by the Raspberry Pi Foundation, it is also available for Mac and PC platforms.
The goal of this project is to create an architecture for interacting with multiple video streams, modifiable in real time through live-coding styles of software development. The starting point will be modelled on the SuperCollider architecture for real-time processing of audio and event streams, but extended to support image and video frame data. It may not be necessary to use the SuperCollider source code, although it will provide a source of guidance. The implementation language is likely to be C++.
Evaluation can be done in the Lab, based on video throughput, time resolution, latency and jitter from filter operations. As an extension, it would also be possible to package "Video Pi" (or choose your own name if you like) into the Sonic Pi editing environment so that kids can create their own video mixers or filters connected to an external camera. There will also be opportunities for future funded research in this area.
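As a rough illustration of the SuperCollider-style graph described above, frames could flow through a chain of unit-generator-like nodes that can be swapped while the stream runs. This is only a sketch under stated assumptions: frames are plain 2-D greyscale lists, and every class and filter name here is hypothetical, not part of SuperCollider or Sonic Pi.

```python
# Hypothetical sketch of a SuperCollider-style processing graph applied
# to video frames (2-D lists of greyscale values 0..255). All names are
# illustrative, not an existing API.

class UnitGenerator:
    """A node in the processing graph, analogous to a SuperCollider UGen."""
    def process(self, frame):
        raise NotImplementedError

class Brightness(UnitGenerator):
    def __init__(self, gain):
        self.gain = gain
    def process(self, frame):
        # Scale every pixel, clamping to the 0..255 range.
        return [[min(255, int(p * self.gain)) for p in row] for row in frame]

class Threshold(UnitGenerator):
    def __init__(self, level):
        self.level = level
    def process(self, frame):
        return [[255 if p >= self.level else 0 for p in row] for row in frame]

class Graph:
    """Runs each frame through an ordered chain of unit generators.
    Live coding would amount to replacing nodes while frames flow."""
    def __init__(self, nodes):
        self.nodes = nodes
    def run(self, frame):
        for node in self.nodes:
            frame = node.process(frame)
        return frame

graph = Graph([Brightness(1.5), Threshold(128)])
result = graph.run([[10, 100], [90, 200]])   # a tiny 2x2 test frame
```

A real implementation would replace the pixel loops with GPU or SIMD kernels; the point of the sketch is only the node-graph shape that live coding would manipulate.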
Originator: Peter Robinson
Touch screens driven by Raspberry Pi computers are being installed outside each office on the SS corridor. These are being configured initially to show calendars and to pass messages to and from people who are working away from the laboratory. However, there are many other possibilities. Think of one and implement it!
Novel interactive data visualisations
Originators: Alan Blackwell with Advait Sarkar and Ben Azvine (BT research)
Large networks of the kind operated by BT are a source of huge quantities of time-varying data with many variables. A wealth of information can be extracted from such data, but initial exploration of the dataset may be formidable, particularly when the features of the dataset are unknown. There are a few standard means of data visualisation, including trend graphs, bubble diagrams, network diagrams, pie charts, geographical maps, sun ray diagrams, and radial views. However, these represent a relatively limited range of statistical methods. The goal of this project is to build on the capabilities of statistical analysis packages such as R and NumPy to create new visualisations as an intuitive alternative to existing statistics packages and open source visualisation tools.
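As one illustration of the radial views mentioned above, a periodic time series can be mapped onto polar coordinates so that daily or weekly cycles line up visually. The sketch below computes only the layout, leaving rendering to any plotting backend; the function name and sample data are hypothetical.

```python
import math

def radial_layout(samples, period):
    """Map (time, value) samples to (x, y) points in the plane:
    the angle encodes time-of-cycle, the radius encodes the value.
    Periodic structure in the data then shows up as radial alignment."""
    points = []
    for t, v in samples:
        angle = 2 * math.pi * (t % period) / period
        points.append((v * math.cos(angle), v * math.sin(angle)))
    return points

# Two samples taken exactly one 24-hour period apart land on the same
# angle, so a recurring daily pattern forms a visible ray.
pts = radial_layout([(0, 1.0), (24, 2.0)], period=24)
```

The same idea generalises to many-variable network data by assigning one such layout per variable, which is the kind of composition an R or NumPy backend makes cheap to experiment with.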
Musical interfaces to programming languages
Originators: Alan Blackwell and Sam Aaron
Live Coding is an exciting music genre featured not only in experimental art music but also in jazz improvisation and at Algoraves (featured in Wired Magazine in August). Cambridge is one of the international centres of Live Coding research and performance. The goal of this project will be to integrate some newly acquired musical equipment (including a MIDI drumkit) into the programming environments used in improvised programming.
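Integrating a MIDI drumkit starts with decoding the note-on messages the kit emits when a pad is struck. The three-byte layout below follows the standard MIDI protocol; the function name and the choice of snare pad in the example are illustrative.

```python
def parse_note_on(msg):
    """Decode a 3-byte MIDI channel voice message, returning
    (channel, note, velocity) for a note-on, or None otherwise.
    Per the MIDI standard, the high nibble 0x9 of the status byte
    marks note-on, the low nibble is the channel, and a note-on
    with velocity 0 is treated as a note-off by convention."""
    status, note, velocity = msg
    if status & 0xF0 != 0x90 or velocity == 0:
        return None
    return (status & 0x0F, note, velocity)

# A strike on a pad sending note 38 (snare in General MIDI) at
# velocity 100 on channel 10, the percussion channel (status 0x99).
event = parse_note_on(bytes([0x99, 38, 100]))
```

A live-coding environment would then map such events to callbacks, so that a drum hit can trigger or reshape running code just as keystrokes do.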
Control room attention models
Originators: Peter Robinson
Many complex industrial systems are run from central control rooms where operators monitor information on multiple screens to identify anomalous conditions. Current design tools for control rooms are limited to 3D models of the hardware which can be used to assess the physical ergonomics, but do not help understand the work of human operators.
This project focuses on developing computational models for predicting the operators' attention, so that the human-machine interface (HMI) can be evaluated and configured properly during control room design. These models are expected to improve the arrangement of information shown through the HMI and reduce the operators' risk of missing important information in critical situations. This will involve predicting visual search patterns over an extended display.
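One simple baseline for predicting visual search over an extended display is a greedy scan-path model with inhibition of return: the gaze repeatedly moves to the most salient region not yet visited. The sketch below is only a starting point under that assumption (the grid values and names are illustrative); a realistic model would also weight distance from the current fixation and operator task.

```python
def predict_scanpath(salience, fixations):
    """Given a 2-D grid of salience scores for display regions, return
    the first `fixations` cell coordinates in decreasing salience
    order, never revisiting a cell (inhibition of return)."""
    visited = set()
    path = []
    for _ in range(fixations):
        best, best_cell = float("-inf"), None
        for r, row in enumerate(salience):
            for c, s in enumerate(row):
                if (r, c) not in visited and s > best:
                    best, best_cell = s, (r, c)
        path.append(best_cell)
        visited.add(best_cell)
    return path

# A highly salient alarm tile (0.9) should attract the first fixation.
path = predict_scanpath([[0.1, 0.9], [0.5, 0.2]], fixations=3)
```

Comparing predicted paths like this against eye-tracking data from operators would be one way to evaluate candidate HMI layouts.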
Driving through Google street view
Originators: Peter Robinson
Google Street View presents an environment as a series of still pictures, with an awkward user interface. It would be better if the view changed continuously and could be controlled from a driving simulator.
The first part of the project would be to build a rendering system that takes a sequence of images from Google Street View and interpolates smoothly between them. This would also involve transforming the images to a consistent frame and handling the images of other vehicles in some way. The second part of the project would be to provide better control, and would involve pre-fetching images for a variety of paths through the data to give smooth real-time performance.
The big question would be to decide between a model-based approach and an image-based approach. On the one hand, extracting a 3D model from the Street View images would be hard, but it would allow smooth animation while driving; that would require a method for removing inconsistent clutter such as people and other vehicles. On the other hand, working directly with the images avoids explicit modelling, but smooth interpolation between discrete images is itself a hard problem.
Requirements: This project would benefit from previous experience with computer vision. It would make a hard MPhil project and is too large for a Part II project.
Originator: Peter Robinson
Tangible user interfaces could be built using a standard mobile phone and 2D bar-codes [Controlled availability of pervasive Web services, IEEE ICDCS 2003]. Design, build and evaluate such a system.
Propose your own project
The Graphics & Interaction Group has a range of interesting hardware. Consider the useful research that could be done if you had access to this and propose something novel and interesting.