Advanced audio-visual aids
The lecture theatres in the Computer Laboratory's new William Gates Building are being equipped with fairly elaborate audio-visual facilities that support experiments as well as the giving of lectures. The suggestions here for improving and exploiting these facilities might make good Part 2 CST projects, but other proposals to use the infrastructure would also be welcome.
The two lecture theatres have similar equipment for audio-visual display. Inputs for microphones, programme sound, composite video, S-video and component video (XGA) are provided in the lecterns and benches, together with three channels for wireless microphones. These are fed back to an audio mixer and a set of video matrix switches in the Projection Room which route the outputs to loudspeakers for sound reinforcement and programme sound, an audio frequency induction loop and a pair of video projectors. There are also auxiliary inputs and outputs in the Projection Rooms and links between the two theatres.
In addition, the large lecture theatre has (or, more precisely, will have) a set of six video cameras mounted on the high-level lighting bar. Two are placed to give wide-angle views of the front of the lecture theatre and of the audience. The other four are mounted on pan-and-tilt heads in the corners of the room. The two in the front corners are equipped with rifle microphones to pick up sound from the audience. The microphones are connected into the audio mixer, and the output from the cameras is available both as composite video and as a digital stream.
The audio mixer, video matrix switches and the lecture theatre lights are all controlled through a single system which exports a user interface through a wall-mounted panel and also through a Web server.
The standard installation presents an API to the computers controlling the different devices and a simple Web server presents this to users. However, the user interface is unsatisfactory. The first project is to design and implement a more suitable interface for everyday use after consultation with users.
This might include a command-line interface at the lowest level and then layers offering facilities for defining and selecting preset scenes for general use and for individual users, or a graphical user interface perhaps resembling a physical patch panel.
The second project is to design and implement a control system for the cameras to facilitate recording and remote sharing of lectures. This will involve manual or automatic control of pan-and-tilt heads for the cameras and their zoom controls.
A system for manual floor control of audience contributions might use interaction with the wide-angle views from the static cameras. Alternatively, it might be possible to automate some of the process. One camera could take in the overall scene as a seminar is presented. Its image could be processed to locate the speaker and guide a second camera to record a close-up picture. Further cameras could take in any projected image or display board, their output being processed to separate out the relatively small number of different images shown together with any interactions with them. This would result in a video and sound recording of the speaker, digitised versions of each prepared visual aid, and a digital record of the speaker's interactions with the aids and with any board. A digital precis of the overall scene could also be produced by recording one frame every 10 seconds or so during the presentation, or whenever there was a substantial change in the scene, for example when a new transparency was placed on the overhead projector.
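The precis rule described above can be sketched in a few lines. This is an illustrative outline only: frames are modelled as flat lists of grey-level pixel values, and the interval and change threshold are assumed values, not measured ones.

```python
INTERVAL = 10.0    # seconds between forced snapshots (assumed)
THRESHOLD = 0.15   # mean absolute pixel difference (0..1) deemed "substantial" (assumed)

def mean_abs_diff(a, b):
    """Average absolute difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_precis(frames):
    """frames: list of (timestamp, pixels) in time order. Returns the kept frames.

    A frame is kept when the interval has elapsed since the last kept frame,
    or when the scene has changed substantially since it, e.g. because a new
    transparency was placed on the overhead projector.
    """
    kept = []
    last_time, last_pixels = None, None
    for t, pixels in frames:
        if (last_pixels is None
                or t - last_time >= INTERVAL
                or mean_abs_diff(pixels, last_pixels) >= THRESHOLD):
            kept.append((t, pixels))
            last_time, last_pixels = t, pixels
    return kept
```

A real implementation would compare downsampled frames from the video stream rather than raw pixels, to make the change detector robust to camera noise.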
The complete lecture could be published as a DVD with the speaker as the main track, and the material being projected and the audience as alternative camera angles.
Automatic indexing of lectures
When reviewing a recording of a seminar or a lecture, it would help if it were possible to jump to a particular point of interest. One way to index the material would be to take written notes during the lecture and then control the replay simply by pointing at part of the notes. See Steve Whittaker's FiloChat paper in CHI '94 for the idea; Mik Lamming's NoTime is even closer.
The CrossPad is an ordinary 9"x6" note pad that includes a digitising tablet so that written strokes for about 80 pages can be recorded and subsequently transferred to a computer in IBM's Ink format for processing. This project involves using that information to control the replay of a digital recording.
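The indexing idea amounts to logging each stroke with the elapsed recording time at which it was written, then seeking the replay to shortly before that moment when the user points at the stroke. The sketch below assumes strokes arrive in time order and uses an assumed five-second "rewind" margin; the stroke data itself (e.g. Ink-format records) is treated as opaque.

```python
import bisect

class NoteIndex:
    """Maps pen strokes to positions in a recording, FiloChat/NoTime style."""

    def __init__(self):
        self._times = []    # stroke timestamps, seconds into the recording
        self._strokes = []  # opaque stroke data, e.g. Ink-format records

    def add_stroke(self, t, stroke):
        # Strokes are assumed to arrive in increasing time order.
        self._times.append(t)
        self._strokes.append(stroke)

    def seek_time(self, stroke_id, margin=5.0):
        """Replay position for a stroke, a little before it was written."""
        return max(0.0, self._times[stroke_id] - margin)

    def stroke_at(self, t):
        """Index of the last stroke written at or before replay time t."""
        return bisect.bisect_right(self._times, t) - 1
```

`stroke_at` supports the reverse direction too: highlighting the relevant part of the notes as the recording plays.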
The wide-angle camera directed at the audience could be used to allow audience participation in lectures. Individuals could be provided with coloured cards which could be held up to take votes or to signal when the lecture was proceeding too quickly or slowly. The camera image could be monitored to recognise this taking place and inform the speaker accordingly.
It would even be possible to have video games played between different parts of the audience, taking a consensus view of the cards being displayed at the corresponding locations.
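Taking a consensus of the cards in one part of the audience is straightforward once the cards have been detected. The toy sketch below assumes the classified camera image arrives as a grid of labels, one per seat position ("red", "green", or None where no card is visible); card detection itself is not shown.

```python
from collections import Counter

def consensus(grid, rows, cols):
    """Most common card label in the sub-grid grid[r][c] for r in rows,
    c in cols, or None if no cards are visible there."""
    votes = Counter(grid[r][c] for r in rows for c in cols
                    if grid[r][c] is not None)
    if not votes:
        return None
    return votes.most_common(1)[0][0]
```

A game between two halves of the audience would then call `consensus` on the left and right column ranges of the grid each frame and feed the results to the game logic.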
Interactive whiteboards are becoming increasingly popular. However, an ordinary whiteboard and a WebCam can provide most of the desired facilities. BrightBoard [CHI '96] uses a camera to monitor an ordinary whiteboard, making it work as an interface to a computer. In particular, it is very easy to record the writing on a whiteboard.
This project involves re-implementing the BrightBoard system to work on a standard PC using a cheap WebCam.
Examination marking by voting
It is hard to ensure that different questions in an examination are of comparable difficulty. One solution might be to regard an examination as an election: the candidates in the examination are the candidates to be elected and the questions form the electorate. Each question ranks some of the candidates into an order (probably excluding some of the candidates and possibly ranking some of them equally). It is then necessary to combine these into an overall ranking that respects the rankings imposed by the questions as much as possible. This is rather like electing several people at once where the order in which they are elected reflects their popularity, as is the case with many transferable voting systems.
Given that single transferable voting is notoriously unrepresentative, it would be better to implement a fairer system for evaluating preferences. I. D. Hill, in his paper "Some aspects of elections" [Journal of the Royal Statistical Society A (1988), Volume 151, Part 2, pages 243-275], compares several approaches and shows how Condorcet's system of paired comparisons removes the worst anomalies. Further technical details (and sample implementations in Python) are available at the Election Methods Web site. This project involves implementing such a system for producing examination class lists.
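The core of Condorcet's paired-comparison method is small enough to sketch. Here each "ballot" is one question's ranking of the candidates it covers, expressed as a list of groups, best first, with ties within a group; candidates a question does not cover express no preference. This sketch only finds a Condorcet winner; producing a full class list also needs a completion rule for cycles (Hill's paper and the Election Methods site discuss several), which is not handled here.

```python
from itertools import combinations

def pairwise_wins(ballots, candidates):
    """wins[a][b] = number of ballots ranking a strictly above b."""
    wins = {a: {b: 0 for b in candidates if b != a} for a in candidates}
    for ballot in ballots:
        rank = {}  # candidate -> rank level on this ballot (0 is best)
        for level, group in enumerate(ballot):
            for cand in group:
                rank[cand] = level
        # Only candidates this ballot actually ranks express a preference.
        for a, b in combinations(rank, 2):
            if rank[a] < rank[b]:
                wins[a][b] += 1
            elif rank[b] < rank[a]:
                wins[b][a] += 1
    return wins

def condorcet_winner(ballots, candidates):
    """Candidate who beats every other head-to-head, or None (cycle/tie)."""
    wins = pairwise_wins(ballots, candidates)
    for a in candidates:
        if all(wins[a][b] > wins[b][a] for b in candidates if b != a):
            return a
    return None
```

Repeatedly removing the winner and re-running the comparison is one (naive, assumed) way to extend this to an ordered list.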
Domestic speed camera
Motorists ignoring the 20 mph speed limit in Grange Road are a hazard for students travelling between the William Gates Building and the centre of town. The authorities appear to be unwilling to take action against them, so an informal "name and shame" policy might be helpful. Could a domestic cam-corder be used as a speed camera?
This project involves using a miniDV cam-corder to film the traffic on a road. The resulting recording would be transferred to a PC as an MPEG recording and then processed digitally. Successive frames could be compared and vehicles tracked. Given some calibration information, it should then be possible to calculate their speed. Pictures of the offending cars could then be published on the Web with information about the time, date, location and speed.
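The speed calculation itself is simple arithmetic once a vehicle has been tracked: a displacement in pixels between two frames, a calibration factor relating pixels to metres along the road (obtained, say, from road markings of known length), and the time between the frames. The figures in the example below are illustrative assumptions.

```python
MPH_PER_MPS = 2.23694  # miles per hour per metre per second

def speed_mph(pixel_displacement, metres_per_pixel, frame_interval_s):
    """Vehicle speed in mph, from its displacement between two frames.

    metres_per_pixel is the calibration factor along the direction of
    travel; frame_interval_s is the time between the compared frames.
    """
    metres = pixel_displacement * metres_per_pixel
    return metres / frame_interval_s * MPH_PER_MPS
```

For example, a car moving 90 pixels between frames 0.4 s apart (every tenth frame at 25 fps), with a calibration of 0.05 m per pixel, is doing about 25 mph and so exceeding the 20 mph limit. In practice the calibration varies across the image with perspective, so the factor would be a function of image position rather than a constant.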
The project could be extended to extract the registration numbers of offending vehicles automatically.