Computer Laboratory

Student projects (Part II and ACS)

Project Ideas

N.B.: I am unable to take any Part II students on for projects in the academic year 2014/15, as I will be leaving Cambridge. However, the below suggestions might still offer project ideas that other supervisors may be interested in supervising. Good luck!

Below are some ideas for potential Part II projects that I am interested in supervising. Note however that I am also happy to develop a new idea together with interested students (and I do, in fact, encourage this). It's your project, and you should really want to do it (also, trust me, being passionate about your project helps you through motivation lows)! Please get in touch if you would like to discuss a potential project idea – or even if you only know a rough area that you are interested in. Note that my own research is in systems and networks, and I prefer to supervise projects that align with my interests.

Current project ideas

A compiler for task-parallel computations at multiple scales

Task-parallelism has become a popular paradigm for writing large-scale distributed programs, e.g. MapReduce computations (used for data processing by Google and in many "cloud" settings), Dryad's task graphs, or the SRG's Ciel system. However, these systems assume a single, fixed task granularity. With data centres built from multi-core machines, a single granularity may not work well: if tasks run on the same machine, communication overheads are low, so smaller tasks can be used; when tasks run on different machines and communicate across the network, communication costs are high, and longer-running tasks are required in order to amortize the overhead.

The goal of this project would be to write a compiler (e.g. using the LLVM framework) that can take a single implementation of a task-parallel algorithm and break it up into sets of tasks at different granularities. The resulting tasks can then be executed using either Ciel, or the SRG's new Firmament execution engine.
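To make the idea concrete, here is a minimal sketch of multi-granularity task splitting. The function names and the "execution engine" are purely illustrative stand-ins, not the Ciel or Firmament APIs: a single algorithm (summing a range) is broken into independent tasks whose size is controlled by a granularity parameter, which a compiler would choose per deployment.

```python
# Sketch of multi-granularity task splitting (hypothetical API, not Ciel/Firmament).

def make_tasks(lo, hi, granularity):
    """Split the range [lo, hi) into tasks of at most `granularity` elements."""
    tasks = []
    start = lo
    while start < hi:
        end = min(start + granularity, hi)
        # Each task is a closure over its own sub-range; default args bind now.
        tasks.append(lambda s=start, e=end: sum(range(s, e)))
        start = end
    return tasks

def run(tasks):
    """Stand-in for an execution engine: run tasks and combine partial results."""
    return sum(t() for t in tasks)

# The same computation at two granularities yields the same answer; coarse
# tasks amortize network costs, fine tasks suit a single shared-memory machine.
coarse = make_tasks(0, 1000, 500)   # 2 tasks
fine = make_tasks(0, 1000, 10)      # 100 tasks
assert run(coarse) == run(fine) == sum(range(1000))
```

The compiler's job would be to derive the equivalent of `make_tasks` automatically from one source implementation, rather than requiring the programmer to parameterize it by hand as above.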

An optional extension would look at scheduling algorithms for a multi-scale setup, and at support for run-time "splitting" of tasks.

Difficulty: ★★ to ★★★★ (depends on scope of the project and level of ambition)

Network packet trace analysis and visualization at scale

As part of the R2D2 network architecture project, we have established a 10 Gbit/s packet capture setup, i.e. we can capture 10G traffic at line rate. The trace files produced by this setup contain detailed packet-level information from one or several tapping points. Obviously, the traces get very large (several gigabytes for a few seconds of tracing), and traditional plotting tools are not well-suited to analyzing and visualizing them.

This project would involve developing a new analysis and visualization framework for the packet traces. This is an interesting challenge, as clever algorithms and memory-efficient data structures are required to be able to (i) process traces quickly (so that interactive analysis is possible), (ii) do so without requiring inordinate amounts of memory, and (iii) visualize the traces in such a way that they can be analyzed effectively.
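As a flavour of point (ii), the sketch below streams a trace in a single pass with O(1) memory per packet, aggregating bytes per one-second bucket without ever holding packet payloads in memory. It assumes the classic libpcap savefile layout; the actual R2D2 capture format may well differ, and a real tool would need far richer aggregations than this.

```python
import io
import struct
from collections import defaultdict

# Single-pass, memory-bounded trace analysis sketch (assumes classic pcap format).
PCAP_GLOBAL_HDR = struct.Struct("<IHHiIII")  # magic, version, tz, sigfigs, snaplen, linktype
PCAP_PKT_HDR = struct.Struct("<IIII")        # ts_sec, ts_usec, incl_len, orig_len

def bytes_per_second(stream):
    """Stream a pcap file object; return {second: total original bytes}."""
    if len(stream.read(PCAP_GLOBAL_HDR.size)) < PCAP_GLOBAL_HDR.size:
        return {}
    buckets = defaultdict(int)
    while True:
        hdr = stream.read(PCAP_PKT_HDR.size)
        if len(hdr) < PCAP_PKT_HDR.size:
            break
        ts_sec, _ts_usec, incl_len, orig_len = PCAP_PKT_HDR.unpack(hdr)
        buckets[ts_sec] += orig_len
        stream.seek(incl_len, io.SEEK_CUR)   # skip payload: O(1) memory per packet
    return dict(buckets)

def synthetic_trace(packets):
    """Build an in-memory pcap from (ts_sec, length) pairs, for testing."""
    buf = io.BytesIO()
    buf.write(PCAP_GLOBAL_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1))
    for ts, length in packets:
        buf.write(PCAP_PKT_HDR.pack(ts, 0, length, length))
        buf.write(b"\x00" * length)
    buf.seek(0)
    return buf
```

For example, `bytes_per_second(synthetic_trace([(10, 100), (10, 50), (11, 200)]))` aggregates the first two packets into the same one-second bucket. The visualization layer would then render such fixed-size summaries rather than raw packets.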

Since R2D2 is ongoing research work, this project is an exciting opportunity to work directly with networking researchers, and to make a contribution to a major network architecture undertaking. The developed tool (and its beta versions) will find direct use in the analysis of real data sets.

N.B.: This project requires familiarity with the concepts taught in the Part IB Networking course as well as the various courses covering data structures and algorithms. Familiarity with existing visualization toolkits (gnuplot, matplotlib etc.) is helpful, but not required. Implementation technique and language are open to personal preference.

Difficulty: ★★★ to ★★★★ (depends on level of ambition)

Cluster-in-a-box: task-parallel computing on many-core machines

The SRG has access to an experimental machine with hundreds of ARM cores, as well as an Intel Single-Chip Cloud (with 48 Pentium-era cores on a single chip). Both of these architectures enable massively parallel computation in an environment that is a lot like a distributed cluster, except with some shared resources.

This project would first measure the properties of the architecture using e.g. the SRG's ipc-bench tool (or a tweaked version of it), and then look at implementing support for task-parallel computation at the operating system level, i.e. providing low-level abstractions for task lifecycle management and data motion on top of the custom communication infrastructure within the box.
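The measurement methodology can be illustrated with a small sketch in the spirit of ipc-bench (though not its actual code): time a byte's round trip between a parent and a forked child over a pipe pair. On the many-core boxes, the same harness would be pointed at the platform's own communication primitives instead of POSIX pipes.

```python
import os
import time

# Illustrative IPC latency measurement (POSIX-only; not the actual ipc-bench tool).
def pipe_round_trip_latency(iterations=1000):
    """Return the mean pipe round-trip time in seconds between two processes."""
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:               # child: echo each byte straight back
        os.close(p2c_w); os.close(c2p_r)
        for _ in range(iterations):
            os.write(c2p_w, os.read(p2c_r, 1))
        os._exit(0)
    os.close(p2c_r); os.close(c2p_w)
    start = time.perf_counter()
    for _ in range(iterations):
        os.write(p2c_w, b"x")  # ping...
        os.read(c2p_r, 1)      # ...pong
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / iterations
```

Comparing such numbers across core pairs (same die, same socket, across the interconnect) is exactly the kind of data a scheduler for these machines would need.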

Task-parallelism has become a popular paradigm for writing large-scale distributed programs, e.g. MapReduce computations (used for data processing by Google and in many "cloud" settings), Dryad's task graphs, or the SRG's Ciel system.

The implementation would interact with the SRG's new Firmament parallel computation engine, and has lots of scope for interesting low-level hackery.

Within the scope of the project, you could also look at the energy consumption of such a server compared to a distributed cluster, and at scheduling techniques that place tasks such that they use shared resources (e.g. CPU caches, shared memory) efficiently.

N.B.: This project draws heavily on courses such as Part IA Operating Systems, Part IB Concurrent and Distributed Systems and Part IB Networking, so interest in and familiarity with these areas are useful. A good working knowledge of C/C++ and of a Linux build environment, or the willingness and ability to learn these quickly, is also highly desirable.

Difficulty: ★★★ to ★★★★ (depends on scope of the project and level of ambition)

Previous (old) project ideas

N.B.: Please do feel free to use these project ideas; I am not myself interested in supervising them at the present time.

Crowd Computing on mobile phones

Some people (including myself) in the SRG are currently working on Skywriting, a Turing-complete cluster computing framework for cloud computing. Skywriting is written in Python and has been ported to the Android operating system, allowing us to run mobile phones as Skywriting workers.

This project would extend Skywriting to enable delay-tolerant computation, i.e. computation that is scheduled on a mobile phone and which returns a result at some (potentially undefined, potentially weakly bounded) point in the future. For example, the phone could compute only while it is being charged and thus connected to mains power. Furthermore, the bandwidth on 3G networks is limited, so the phone could wait until it has a fast network connection before transferring input and output data (an ambitious student could even look into ad-hoc networks between multiple Skywriting-enabled phones to epidemically spread input and result data).
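The core scheduling decision can be sketched as a simple policy function. The device-state fields and the size threshold below are hypothetical illustrations, not part of Skywriting's actual API:

```python
# Hypothetical delay-tolerant execution policy for a Skywriting worker phone.
def should_run_task(charging, network, input_mb, wifi_only_above_mb=5):
    """Decide whether a worker phone should start a task now.

    Policy: compute only on mains power, and defer tasks with large inputs
    until a fast (non-3G) connection is available for the data transfer.
    """
    if not charging:
        return False   # preserve battery: wait until plugged in
    if network != "wifi" and input_mb > wifi_only_above_mb:
        return False   # wait for a fast link before a big transfer
    return True
```

A real implementation would hook this into Android's battery and connectivity broadcasts, and the interesting research question is how the cluster-side scheduler should account for workers that defer in this way.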
A student attempting this project should be familiar with Python and not afraid of working with research code that is in active development. The evaluation of this project would probably entail some simulations and a small practical evaluation with some phones in the lab.

Difficulty: ★★

Encrypted Facebook

Many people have recently become increasingly worried about Facebook taking a rather relaxed attitude towards the privacy of personal data. However, attempts at building more secure social networks with technical solutions that ensure data privacy, such as encryption, have not enjoyed much success because Facebook capitalises on the "network effect" of everyone else using it.

It would be very useful if we could still use Facebook, but encrypt all the data stored there, enabling only friends who also use the same browser extension to see the cleartext (or the unscrambled photos, etc.). There are various directions one could take with this project: you could make it so that only messages exchanged with friends who also use the extension are encrypted, or so that everything is encrypted. To implement the plugin, you could use a toolkit such as Greasemonkey, or implement it entirely yourself.
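The wrapping scheme such an extension needs might look like the sketch below: ciphertext is armored into a marker string that can be posted as an ordinary message, and friends holding the shared key (and running the extension) unwrap it, while plain messages pass through untouched. The marker format and the PRF-in-counter-mode cipher here are illustrative only; a real extension should use an authenticated scheme from a vetted cryptographic library, and key distribution among friends is a hard open problem in its own right.

```python
import base64
import hashlib
import hmac
import os

# Toy sketch of an "encrypted Facebook" message wrapper -- NOT production crypto.
MARKER = "-----ENCFB:"

def _keystream(key, nonce, length):
    """Derive a keystream by running HMAC-SHA256 as a PRF in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap(key, message):
    """Encrypt `message` and armor it so it can be posted as ordinary text."""
    nonce = os.urandom(12)
    data = message.encode("utf-8")
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    return MARKER + base64.b64encode(nonce + cipher).decode("ascii") + "-----"

def unwrap(key, text):
    """Decrypt wrapped messages; pass plain messages through unchanged."""
    if not text.startswith(MARKER):
        return text   # from a friend without the extension
    raw = base64.b64decode(text[len(MARKER):-5])
    nonce, cipher = raw[:12], raw[12:]
    data = bytes(a ^ b for a, b in zip(cipher, _keystream(key, nonce, len(cipher))))
    return data.decode("utf-8")
```

In the browser, the extension would apply `wrap` to outgoing form fields and `unwrap` to matching strings in rendered pages, so Facebook itself only ever stores the armored ciphertext.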
Difficulty: ★ to ★★