Department of Computer Science and Technology

Systems Research Group – NetOS

Student Projects (2013—2014)

NetOS

This page collects together various Part II project suggestions from the Network and Operating Systems part of the Systems Research Group. In all cases there is a contact e-mail address given; please get in touch if you want more information about the project.

Under construction: Please keep checking back, as more ideas will hopefully be added to this page during the coming weeks.

Note: there are a number of stumbling blocks in Part II project selection, proposal generation and execution. Some useful guidance from a CST alumnus (and current NetOS PhD student) is here.

Current project suggestions

Delay Tolerant Scavenger Transport

Contact: Arjuna Sathiaseelan (with Toby Moncaster)

Delay-tolerant networking (DTN) [1] has gained significant interest within the wider Internet research community. DTN uses the Bundle Protocol (BP) [2] to store and forward data as bundles. This enables reliable transfer of data over challenged networks that have high bit-error rates, long delays and/or frequent disconnections. BP is an application-layer protocol and uses the underlying transport protocols (TCP, UDP or LTP) to transport the bundles between bundle nodes (entities that can send and receive bundles).

In this project, the student is expected to develop a Delay-tolerant Scavenger transport where the BP uses a scavenger transport method like LEDBAT [3] instead of traditional TCP. It is envisaged that such a transport method would enable new access methods such as low-cost access to the Internet [4] or moving large amounts of content without hampering the performance of competing traffic.

The student is expected to extend the current DTN implementation (http://www.dtnrg.org/wiki/) to use LEDBAT and conduct extensive testing using testbeds.
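
For orientation, the core of LEDBAT is the delay-based window update defined in RFC 6817 [3]. The sketch below is a minimal Python illustration of that controller logic, assuming a feed of one-way delay samples; the constants and the on_ack interface are illustrative, not part of the DTN reference implementation, and the real work is integrating something of this shape underneath the Bundle Protocol.

    # Minimal sketch of the LEDBAT window update from RFC 6817, assuming
    # one-way delay samples are available. Names such as `on_ack` are
    # illustrative, not part of the DTN reference implementation.

    TARGET = 0.100     # target queuing delay in seconds (RFC 6817 upper bound)
    GAIN = 1.0         # cwnd gain; must not ramp faster than standard TCP
    MSS = 1448         # bytes per segment (assumed)
    MIN_CWND = 2 * MSS

    class LedbatController:
        def __init__(self):
            self.cwnd = MIN_CWND              # congestion window in bytes
            self.base_delay = float("inf")    # smallest one-way delay seen

        def on_ack(self, bytes_acked, one_way_delay):
            """Update cwnd for newly acknowledged data."""
            self.base_delay = min(self.base_delay, one_way_delay)
            queuing_delay = one_way_delay - self.base_delay
            off_target = (TARGET - queuing_delay) / TARGET
            self.cwnd += GAIN * off_target * bytes_acked * MSS / self.cwnd
            self.cwnd = max(self.cwnd, MIN_CWND)
            return self.cwnd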

References
[1] Kevin Fall, A Delay-Tolerant Network Architecture for Challenged Internets, ACM SIGCOMM, 2003.
[2] K. Scott, S. Burleigh, Bundle Protocol Specification, IETF RFC 5050, http://tools.ietf.org/html/rfc5050.
[3] S. Shalunov, G. Hazel, J. Iyengar, M. Kuehlewind, Low Extra Delay Background Transport (LEDBAT), IETF RFC 6817.
[4] A. Sathiaseelan, J. Crowcroft, LCD-Net: lowest cost denominator networking, ACM SIGCOMM CCR Vol. 43, No. 2, pp. 52-57, April 2013.

Pre-requisites: Good C/C++ programming skills, good scripting skills, and an understanding of networking.


A compiler for task-parallel computations at multiple scales

Contact: Malte Schwarzkopf

Task-parallelism has become a popular paradigm for writing large-scale distributed programs, e.g. MapReduce computations (used for data processing by Google and in many "cloud" settings), Dryad's task graphs, or the SRG's Ciel system. However, previous systems assume a single, fixed-size task granularity. With data centres of multi-core machines, a single granularity may not work well: if tasks run on the same machine, communication overheads are low and thus smaller tasks can be used; conversely, when running in a distributed setting and communicating across the network, communication costs are high and longer-running tasks are required in order to amortize the overhead.
The goal of this project would be to write a compiler (e.g. using the LLVM framework) that can take a single implementation of a task-parallel algorithm and break it up into sets of tasks at different granularities. The resulting tasks can then be executed using either Ciel, or the SRG's new Firmament execution engine.
An optional extension would look at scheduling algorithms for a multi-scale setup, and at support for run-time "splitting" of tasks.
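
As a rough illustration of the trade-off, the Python sketch below runs the same computation at two chunk sizes; it is purely illustrative, and a compiler-generated version would emit real Ciel or Firmament tasks rather than local function calls.

    # Illustrative only: the same computation partitioned at two different
    # task granularities. `spawn_tasks` just shows how chunk size trades off
    # task count against per-task work.

    def spawn_tasks(data, chunk_size):
        """Split `data` into tasks of `chunk_size` elements each."""
        return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    def run_task(chunk):
        # Stand-in for the task body (here: a partial sum).
        return sum(chunk)

    def parallel_sum(data, chunk_size):
        tasks = spawn_tasks(data, chunk_size)
        return sum(run_task(t) for t in tasks)

    data = list(range(1_000_000))
    # Small tasks: fine-grained, suited to cores sharing memory on one machine.
    print(parallel_sum(data, chunk_size=10_000))
    # Large tasks: coarse-grained, amortise network overhead across machines.
    print(parallel_sum(data, chunk_size=250_000))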

Difficulty: ★★ to ★★★★ (depends on scope of the project and level of ambition)

Pre-requisites: Good C/C++ programming skills, interest in compiler construction (Part IB course sufficient).


Network packet trace analysis and visualization at scale

Contact: Malte Schwarzkopf and Matthew P. Grosvenor

As part of the R2D2 network architecture project, we have established a 10 Gbit/s packet capture setup, i.e. we can capture 10G traffic at line rate. The trace files produced by this setup contain detailed packet-level information from one or several tapping points. Obviously, the traces get very big (several gigabytes for a few seconds of tracing), and traditional plotting tools are not well-suited to analyzing and visualizing them.

This project would involve developing a new analysis and visualization framework for the packet traces. This is an interesting challenge, as clever algorithms and memory-efficient data structures are required to (i) process traces quickly (so that interactive analysis is possible), (ii) visualize them in such a way that they can be analyzed effectively, and (iii) do so without requiring inordinate amounts of memory.
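
As a starting point, a single streaming pass that keeps only an aggregate (here, bytes per millisecond) already avoids holding whole packets in memory. The Python sketch below assumes classic libpcap trace files; the trace filename is hypothetical.

    # Minimal streaming sketch, assuming classic libpcap files (24-byte global
    # header, 16-byte per-packet headers). Only a per-millisecond byte count is
    # kept in memory, not the packets themselves.

    import struct
    from collections import defaultdict

    def bytes_per_millisecond(path):
        buckets = defaultdict(int)   # ms since epoch -> bytes observed
        with open(path, "rb") as f:
            global_hdr = f.read(24)
            magic = struct.unpack("<I", global_hdr[:4])[0]
            endian = "<" if magic == 0xa1b2c3d4 else ">"
            while True:
                hdr = f.read(16)
                if len(hdr) < 16:
                    break
                ts_sec, ts_usec, incl_len, orig_len = struct.unpack(endian + "IIII", hdr)
                f.seek(incl_len, 1)          # skip payload: only lengths needed
                buckets[ts_sec * 1000 + ts_usec // 1000] += orig_len
        return buckets

    # Example: print the busiest millisecond in a trace (path is hypothetical).
    # counts = bytes_per_millisecond("r2d2_trace.pcap")
    # print(max(counts.items(), key=lambda kv: kv[1]))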

Since R2D2 is on-going research work, this project is an exciting opportunity to work directly with networking researchers, and to make a contribution to a major network architecture undertaking. The developed tool (and its beta versions) will find direct use in analysis of real data sets.

Pre-requisites: This project requires familiarity with the concepts taught in the Part IB Networking course as well as the various courses covering data structures and algorithms. Familiarity with existing visualization toolkits (gnuplot, matplotlib etc.) is helpful, but not required. Implementation technique and language are open to personal preference.

Difficulty: ★★★ to ★★★★ (depends on level of ambition)


Previous project suggestions (may still be available)

Eternal Sunshine of a spotless Facebook

Contact: Jon Crowcroft

This is about implementing forgetting for Facebook (or the Social Network of your choice).

Basically, Robin Dunbar's work shows that even when you think you have many more friends online, your real friends number about the same in cyberspace as in Real Life, when you account for ones that you actually communicate with (write on wall, etc). This number is typically around 150.

This project would build a system that is based on Dunbar's work, and automates forgetting (as opposed to unfriending). Crucially, it would also use his work to determine privacy settings for your "circle of friends", based on the work that shows you organise your social group in circles, with kith and kin nearest, then friends, then colleagues, then acquaintances and so on. Using results from location-based check-in (e.g. via Foursquare), we can auto-classify users on Facebook and set privacy appropriately. Testing could be done against local users or against data from the myPersonality project.
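
A hedged sketch of the classification step, in Python: rank contacts by how often you actually interact with them and bucket them into Dunbar-style layers of roughly 5, 15, 50 and 150 people, which could then drive both per-layer privacy settings and automated forgetting. The interaction counts are assumed to come from the social network's API.

    # Sketch only: interaction counts would come from the social network's
    # API; here they are just a dictionary mapping friend -> count.

    DUNBAR_LAYERS = [(5, "kith and kin"), (15, "close friends"),
                     (50, "friends"), (150, "acquaintances")]

    def classify(interactions):
        ranked = sorted(interactions, key=interactions.get, reverse=True)
        circles = {label: [] for _, label in DUNBAR_LAYERS}
        for position, friend in enumerate(ranked):
            for size, label in DUNBAR_LAYERS:
                if position < size:
                    circles[label].append(friend)
                    break
            # Contacts ranked beyond ~150 are candidates for automated
            # "forgetting" rather than unfriending.
        return circles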


Elgoog

Contact: Jon Crowcroft

The proposed topic is to develop a tool for assessing a web page to determine to what extent it possesses the characteristics which would lead Google to return it on the first page of Google search returns, and, if not, to determine what changes should be made to the page to achieve this. This topic is both intellectually challenging and potentially of great practical value.

The algorithm which Google uses to rank its search returns is believed to contain more than 200 factors, but it is not known (except in a few cases) what these factors are nor is it known how they are weighted.

Webmasters find that their websites may rank very well in Google, being returned in the first page for relevant search phrases, or may rank very badly. And these rankings can fluctuate wildly over time. It is often difficult to understand why this is occurring. As a result there is much speculation and anecdote on webmaster discussion forums, with webmasters exchanging ideas on why their website may have suddenly been promoted or demoted in the Google search rankings.

The aim of the proposed research is to bring a more scientific analysis to this problem, by developing a Ranking Algorithm which, when fed with the key characteristics of web pages, can predict with some degree of accuracy how they will be ranked by Google. Such an algorithm will provide objective and scientific guidance as to what webmasters need to do in order to produce websites which Google will regard as of high quality, and will rank highly. Better quality websites will help webmasters, by lifting their websites in the Google rankings; they will help users; and they will help Google, by improving the quality of websites and hence the quality of the experience of Google users.

Contact details of the proposer are:

Dr. Alex Reid, 
27 Millington Road, 
Cambridge CB3 9HW

Tel: 01223 319733. Email: [Javascript required]

As exploratory work, Dr Reid has undertaken an extremely basic exercise, developing by trial and error a crude formula to predict Google rankings. Referred to as Project 1, this is described in Annex B.

Next steps

Although Project 1 had some success in predicting Google rankings, it falls far short of what is required. Specifically, it failed to predict the performance of pages in the Abacus Construction Index USA website. These score highly in terms of the Project 1 formula, but do not appear on the first page of Google search returns. There are clearly other important factors at work which have not been captured by the Project 1 formula.

It is envisaged that the first step would be to brainstorm all the possible characteristics of a web page which might be expected to affect its Google rankings, and to assess which of these are measurable. Characteristics could be of various kinds, for example to do with the content of the page (e.g. number of words, frequency of keywords, quality of grammar, quality of code, frequency of updating, etc.), the number and quality of inward and outward hyperlinks (in respect of the web page and the whole website), geography, and user behaviour (e.g. average time spent on the web page and on the website).

Characteristics could be measured for say the first 100 or first 1000 web pages ranked by Google for particular search terms. These data could then be used to develop and test a Ranking Algorithm which mimics so far as possible the actual Google rankings. This would, in a sense, be a reverse engineering of the Google algorithm. Because the Google algorithm changes every few weeks, the Ranking Algorithm would need to be revised and re-calibrated on a regular basis.
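
As a very simple illustration of the calibration step, the Python sketch below fits a linear model to a handful of hypothetical page characteristics against observed rank positions. The feature names and numbers are invented for illustration only, and a real Ranking Algorithm would need far more features and a more robust rank-learning method than ordinary least squares.

    import numpy as np

    # Each row: hypothetical measurable characteristics of one ranked page,
    # e.g. [word_count, keyword_frequency, inbound_links, days_since_update].
    features = np.array([
        [1200, 0.021, 340, 12],
        [800,  0.015, 120, 40],
        [2500, 0.030,  15,  3],
        [600,  0.010,   8, 90],
    ], dtype=float)
    observed_rank = np.array([1, 2, 3, 4], dtype=float)   # positions in Google

    # Fit weights so that features @ w approximates the observed rank.
    A = np.hstack([features, np.ones((len(features), 1))])  # add intercept
    weights, *_ = np.linalg.lstsq(A, observed_rank, rcond=None)

    def predicted_rank(page_features):
        return float(np.append(page_features, 1.0) @ weights)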

For practical purposes there would be no need for the Ranking Algorithm to predict the precise order in which Google would rank its search returns. What is needed is general guidance as to the characteristics of a web page which will help or hinder its performance in Google rankings.

Co-operation with researcher

Dr Reid would be very happy to co-operate with the researcher, by sharing his practical experience of the Google ranking of the Extonet websites. Specifically, he would be happy for one or more of the Extonet websites to be used as guinea pigs for test purposes in the research.

Additional information: see here


Bitcoin for Signposts

Contact: Jon Crowcroft

This is what it says on the tin - see bitcoin for a virtual currency and this signposts talk for info about the context.

Signposts is a new mechanism for regaining connectivity between a cloud of personal devices that may be separated by firewalls, NATs and other obstacles to personal communication. Signposts uses all kinds of tricks to get connectivity, as a side effect of name lookup for a device (e.g. jon.work-desktop.signpos.st looks up jon.home-ipad.crowcroft.signpos.st so as to sync work to home before I leave work). The system provides security, and relies on a minimal service "in the cloud", which is the signpost.

However, one can imagine that one could decentralise running the signposts (in a peer-to-peer manner) but one might need a payment system to incentivise non-friends or unknown people to provide the signpost service. Hence one might employ bitcoin, which is a decentralised currency, for philosophical reasons.


Video-Free Video Conferencing for Android Phones

Contact: Jon Crowcroft

This project would implement a video conferencing application for Android phones which would not transmit any video. Instead, a wireframe model and texture-mapped rendering of the speaker's face would be shown to the listener. That way, during a normal phone call, the speech could be used to drive a sequence of simple morphs of the lips, making it look like the speaker was sending you live video. Obviously you would need to bootstrap the system by obtaining wireframe models and some stills of the speaker (some are already on people's Facebook pages, available via contacts), and the processing required to extract the important sounds (plosives and so on) from the digital speech, in order to choose the right mouth shape to make next, is quite intensive.
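
As a purely illustrative sketch of the final step, the Python snippet below maps recognised phoneme classes to mouth shapes ("visemes") that would drive the wireframe morphs. The phoneme labels and shape names are assumptions, not the output of any particular speech-processing library.

    # Illustrative phoneme-class -> viseme table; groupings and names assumed.
    VISEMES = {
        "p b m": "lips_closed",      # bilabial plosives / nasal
        "f v":   "lip_teeth",
        "a":     "mouth_wide_open",
        "i e":   "mouth_spread",
        "o u":   "lips_rounded",
        "sil":   "mouth_rest",       # silence
    }

    def viseme_for(phoneme):
        for group, shape in VISEMES.items():
            if phoneme in group.split():
                return shape
        return "mouth_rest"

    def morph_sequence(phoneme_stream):
        """Turn a stream of recognised phonemes into wireframe morph targets."""
        return [viseme_for(p) for p in phoneme_stream]

    print(morph_sequence(["p", "a", "sil", "f", "o"]))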

Optionally, one could animate a background that is appropriate to the background sounds (traffic, wind in trees, etc). An interesting part of the evaluation might be to see whether this is better than a poor real video stream in terms of user experience, and also to try what happens when animating the wrong person's head/face or background :)


Implement Lazy Susan

Contact: Jon Crowcroft

Lazy Susan is a distributed denial of service prevention system, described in this Tech Report, which uses shared secrets and delays as "proof of work". It would be nice to build and evaluate it. One can envisage an implementation being done purely at the HTTP level, with Apache server-side plugins and some fairly simple proxy code to run in a front end (eventually destined for an OpenWRT router, for example) to rate-limit attempts to reach sites which have not issued permits yet. This might also be linked with Signposts (see project idea above) as a prevention of DDoS attacks on a given signpost in the cloud.
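
To give a flavour of where a permit check might sit in such a proxy, the Python sketch below validates an HMAC-based permit derived from a shared secret. This is only an illustrative stand-in; the actual permit and delay scheme is the one described in the Tech Report.

    # Hedged sketch: one possible permit format, NOT the Lazy Susan scheme.
    import hmac, hashlib, time

    SECRET = b"site-shared-secret"           # provisioned out of band (assumed)

    def issue_permit(client_ip, lifetime=60):
        expiry = int(time.time()) + lifetime
        msg = f"{client_ip}:{expiry}".encode()
        tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{expiry}:{tag}"

    def permit_valid(client_ip, permit):
        try:
            expiry_str, tag = permit.split(":")
            expiry = int(expiry_str)
        except ValueError:
            return False
        if expiry < time.time():
            return False
        msg = f"{client_ip}:{expiry}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

    # A front-end proxy would rate-limit (or delay) requests whose permit fails
    # this check, instead of forwarding them to the origin server.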


Code Editing in Local Style

Contact: David Chisnall

Most large projects have their own set of style guidelines, controlling things like indentation (tabs, spaces, or tabs for indent and spaces for alignment), brace style, where variables are declared (e.g. at point of first use versus the start of the function), and certain naming conventions (e.g. camel case, underscore separated, initial capitals for globals). When working with multiple projects, switching between the styles can be difficult. Ideally, a user would be able to check out the code, translate it into their own preferred style, edit it, and then commit it upstream in the project's style.

This project aims to use libclang to accomplish this. The first step is an automatic indent tool. This should be a more semantically-aware version of the standard UNIX tool, built using libclang. Initially, it should require an explicit style definition, but as an extension it should be able to infer a style from an example (and point to contradictions).
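
As a hedged illustration of style inference, the Python sketch below guesses two aspects of style (indentation characters and brace placement) from an example file and reports contradictory evidence. The real tool would work over the libclang token stream rather than raw text lines, but the idea of counting evidence and flagging contradictions is the same.

    from collections import Counter

    def infer_style(path):
        indent = Counter()
        braces = Counter()
        with open(path) as f:
            for line in f:
                if line.startswith("\t"):
                    indent["tabs"] += 1
                elif line.startswith("    "):
                    indent["spaces"] += 1
                s = line.strip()
                if s == "{":
                    braces["own line"] += 1
                elif s.endswith("{"):
                    braces["same line"] += 1
        style = {name: (c.most_common(1)[0][0] if c else "unknown")
                 for name, c in (("indent", indent), ("braces", braces))}
        # Contradictions: any evidence for a minority convention.
        contradictions = {name: dict(c)
                          for name, c in (("indent", indent), ("braces", braces))
                          if len(c) > 1}
        return style, contradictions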

The other part of the puzzle involves a semantically-aware diff, so that only new parts of the code are pushed upstream. Editing code in this mode should not change any existing code, so before committing the tool must revert to the old style, including all style violations in parts of the code that were not modified.

The criteria for success would be editing, for example, LLVM and FreeBSD source code files in the same local style.

Prerequisites: Knowledge of C, understanding of coding conventions, familiarity with revision control

Further Reading


Type Inference for LanguageKit / Pragmatic Smalltalk

Contact: David Chisnall

LanguageKit is a framework for implementing dynamic object-oriented languages using the same underlying object model as Objective-C, using the GNUstep Objective-C runtime. The compiler uses LLVM for final code generation and optimisation, but currently misses a number of relatively simple optimisation opportunities.

The most obvious omission is a lack of type inference. For example, when a floating point value is passed into a method, it boxes it, and then unboxes it for every operation. For simple floating point operations, such as multiply or add, this boxing and unboxing takes far longer than the operation. The same happens with structures, which are wrapped in objects even when they are only ever passed to a method or function directly, without modification.

This project should add some LLVM optimisation passes that use the flow control information from LLVM to determine when redundant operations of this nature are happening and remove them.

The criteria for success would include a number of benchmarks showing common missed optimisation opportunities for LanguageKit now, running faster with the new optimisations.

Prerequisites: Knowledge of C++, ideally some of Objective-C and/or Smalltalk. Understanding of basic compiler concepts, such as SSA form. Prior experience with LLVM would be helpful, but not essential.

Further Reading


Objective-C on the Web

Contact: David Chisnall

The Étoilé project includes a proof-of-concept Objective-C to JavaScript translator with a minimal set of JavaScript support libraries allowing trivial test programs to run. This project would extend this with a richer set of libraries, reusing GNUstep code where possible and wrapping JavaScript objects where appropriate, to allow view objects written in Objective-C to run in the browser and communicate with model objects on the server.

This will also involve implementing distributed objects over WebSockets, most likely by passing JSON segments from the client to the server and translating them into NSInvocation instances on the server. It should also involve implementing the basic drawing classes in terms of the JavaScript APIs for the canvas tag.
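
To make the wire format concrete, the sketch below shows one possible JSON encoding of a message send that the server could turn into an NSInvocation. The field names and object-id scheme are assumptions for illustration, and the snippet is written in Python purely for brevity.

    import json

    def encode_invocation(target_id, selector, args):
        """Encode a message send destined for a remote Objective-C object."""
        return json.dumps({
            "target": target_id,    # opaque id of the server-side object (assumed)
            "selector": selector,   # e.g. "setName:age:"
            "args": args,           # JSON-serialisable argument list
        })

    def decode_invocation(payload):
        msg = json.loads(payload)
        return msg["target"], msg["selector"], msg["args"]

    # Example round trip; in the real system the payload would travel over a
    # WebSocket connection between the browser and the GNUstep server.
    payload = encode_invocation("obj-42", "setName:age:", ["Alice", 30])
    print(decode_invocation(payload))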

If completed correctly, it should be possible to take a view class written in Objective-C, compile it to JavaScript, serve it to a client, and have it use a delegate object running in Objective-C on the server.

Prerequisites: Knowledge of JavaScript and modern web programming APIs. Some understanding of Objective-C.

Further Reading


Simulated Annealing for LLVM Optimisation Selection

Contact: David Chisnall

LLVM runs a set of modular optimisation passes as the standard group. These are selected by someone who thought they seemed sensible, but no one has tested that they are the optimal configuration. Simulated annealing is an approach for approximating optimal solutions to problems with unfeasibly large search spaces, by starting with a random configuration and permuting it, testing whether the change is an improvement. If it is an improvement, then it is always accepted. If not, then it is accepted with a probability that decreases over the runtime of the attempt. This second step helps avoid local optima that are globally sub-optimal.

There are three basic operations: insert a pass, remove a pass, or swap the position of two passes in the chain. After each step, the new sequence should be used to optimise some test programs and run them, measuring code size and performance.
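
A minimal sketch of the annealing loop is shown below in Python. The way `opt` is invoked and the cost function (`run_benchmark`) are assumptions left as placeholders; the essential parts are the three mutation operations and the temperature-dependent acceptance rule.

    import math, random, subprocess, copy

    def evaluate(passes):
        # Assumed: optimise a bitcode file with this pass order, then build,
        # run and time the benchmarks; smaller cost is better.
        subprocess.run(["opt"] + ["-" + p for p in passes] +
                       ["bench.bc", "-o", "bench.opt.bc"], check=True)
        return run_benchmark("bench.opt.bc")   # assumed helper, not shown

    def mutate(passes, available):
        candidate = copy.copy(passes)
        op = random.choice(["insert", "remove", "swap"])
        if op == "insert" or not candidate:
            candidate.insert(random.randrange(len(candidate) + 1),
                             random.choice(available))
        elif op == "remove":
            candidate.pop(random.randrange(len(candidate)))
        else:
            i, j = random.randrange(len(candidate)), random.randrange(len(candidate))
            candidate[i], candidate[j] = candidate[j], candidate[i]
        return candidate

    def anneal(initial, available, steps=1000, t0=1.0):
        current, cost = initial, evaluate(initial)
        for step in range(steps):
            temperature = t0 * (1 - step / steps)      # simple cooling schedule
            candidate = mutate(current, available)
            candidate_cost = evaluate(candidate)
            delta = candidate_cost - cost
            # Always accept improvements; accept regressions with decreasing
            # probability to escape local optima.
            if delta < 0 or random.random() < math.exp(-delta / max(temperature, 1e-9)):
                current, cost = candidate, candidate_cost
        return current, cost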

A successful project should show some improvement in performance, code size, or both, over the standard ordering of optimisation passes.

Prerequisites: Basic knowledge of C++, shell scripting for evaluation.

Further Reading


Capsicum Support in GNUstep

Contact: David Chisnall

GNUstep is an implementation of the APIs from the OpenStep specification and later from Apple's Cocoa. This project will involve modifying the framework to make use of Capsicum, a simple set of capability APIs for FreeBSD. There are several steps:

  • Modify NSWorkspace to read a description of the required services from an application or tool's property list, ensure that the application has access to them, and then call cap_enter before spawning. This part requires rtld to support sandboxed-on-launch code. If this support is not finished, then a first approximation can be achieved by calling cap_enter() from NSApplicationMain().
  • Implement a file chooser service that will pass file and directory descriptors to applications that try to use the standard open / save dialogs.
  • Ensure that the distributed notification centre and pasteboard server connections are open before entering the sandbox.

At this stage, applications should be usable with Capsicum after some small modifications. There are several additional small changes that could make the porting easier:

  • Modify NSFileWrapper to use openat() and friends.
  • Create an NSString subclass that carries a file descriptor for a base path as an instance variable and allows all of the standard path modification operations to work relative to this.
  • Maintain a dictionary mapping paths to directory descriptors, so that attempts to open a file with NSFileHandle or NSFileWrapper will use openat() via this mechanism (see the sketch after this list).
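
The sketch below illustrates the directory-descriptor dictionary from the last item above, using Python's dir_fd support for openat()-style opens purely as a model; the real change would live in GNUstep's Objective-C classes, not Python.

    import os

    class DirectoryDescriptors:
        """Model of a path -> directory-descriptor table used after cap_enter()."""
        def __init__(self):
            self._dirs = {}   # absolute directory path -> directory descriptor

        def add(self, directory):
            """Record a descriptor for a directory granted before entering the sandbox."""
            self._dirs[directory] = os.open(directory, os.O_RDONLY | os.O_DIRECTORY)

        def open(self, path, flags=os.O_RDONLY):
            """Open `path` relative to a previously granted directory (openat)."""
            directory, name = os.path.split(path)
            dfd = self._dirs[directory]          # KeyError: no capability held
            return os.open(name, flags, dir_fd=dfd)

    # Usage (paths hypothetical): descriptors handed over before sandboxing.
    # caps = DirectoryDescriptors(); caps.add("/home/user/Documents")
    # fd = caps.open("/home/user/Documents/notes.txt")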

Evaluation should include some example applications running in sandboxes, a comparison with the MAC-based sandboxing on OS X, and ideally an example of using Distributed Objects to implement privilege separation in an existing application.

Prerequisites: Knowledge of Objective-C, UNIX programming.

Further Reading


CHERI Garbage Collection

Contact: David Chisnall

This project depends on an ongoing research project and will involve working with the research team. It is probably best suited to a masters student.

The CHERI (Capability Enhanced RISC Instructions) architecture is a MIPS-based softcore containing a capability coprocessor. Capabilities, in this setting, are fat pointers that contain a base, a range, and a set of permissions and flags. The architecture provides tagged memory, distinguishing between memory storing capabilities and memory containing only data. All memory accesses must go via a capability and so interior pointers stored in normal memory are just offsets.

This project should implement a garbage collector that takes advantage of the ability to accurately identify memory. There are a lot of potential activities within this project. The simplest would be a single-scan for address space reclamation. In typical use, C code on CHERI will never reuse address space. A 64-bit virtual address space provides a lot of scope for this, but it will eventually be exhausted, especially for sandboxed libraries that are in a smaller address space within a larger system. Being able to identify address ranges that have no capabilities pointing to them is useful for this, allowing safe address space reuse.

A more complete solution would implement a garbage collector that could identify the complete set of reachable memory. Again, this could initially be a stop-the-world mark-and-sweep collector (stopping the world for the mark, resuming for the sweep) and then be extended along the lines listed below.
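
The single-scan reclamation idea can be modelled abstractly as an interval problem: given the capabilities found by scanning tagged memory and the set of allocations made so far, report the allocations that no capability points into. The Python sketch below is such a model, not CHERI code; on real hardware the capability list would come from the tagged-memory scan.

    def unreferenced_ranges(allocations, capabilities):
        """
        allocations: list of (base, length) address ranges handed out so far.
        capabilities: list of (base, length) capabilities found by the scan.
        Returns the allocations whose address ranges can safely be reused.
        """
        def overlaps(a, b):
            a_base, a_len = a
            b_base, b_len = b
            return a_base < b_base + b_len and b_base < a_base + a_len

        return [alloc for alloc in allocations
                if not any(overlaps(alloc, cap) for cap in capabilities)]

    # Example: the second allocation has no capability referring to it, so its
    # address range can be reused.
    allocs = [(0x1000, 0x100), (0x2000, 0x100)]
    caps = [(0x1000, 0x40)]
    print(unreferenced_ranges(allocs, caps))    # -> [(8192, 256)]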

Possible extensions include:

  • Integration with the host OS to cache the locations of pointers from swapped regions, so they don't have to be paged back in, and to identify mapped regions of virtual address space that may contain pointers.
  • Copying collection, where all capabilities for an object are identified and marked as read-only, the object is copied, and the capabilities are then updated to refer to the copy.
  • Generational collection, where capabilities for older objects are marked read-only (immutable) so that they do not have to be re-scanned. The trap handler should move them into a younger generation if they are modified.

Prerequisites: Knowledge of C, graph theory, virtual memory, MIPS assembly.

Further Reading


Creating a High-speed TCP Traffic Generator

Contact: Toby Moncaster

Data centres are a key part of the current Internet: from Google and Facebook to Amazon and Microsoft, they provide network services, search, cloud computing and data storage. Within the lab we are hoping to create an emulator capable of modelling such a system. A key part of this will be the ability to generate high-speed TCP traffic. Commercial traffic generators are available, but they are extremely expensive.

The NetFPGA platform provides an interesting alternative. NetFPGA is a fully programmable 4-port Ethernet card running at either 1Gbps or 10Gbps. By coupling this with appropriate control logic and a suitable traffic model (or pcap file), it can be used as a high-speed traffic generator. While a 1Gbps traffic generator is useful, it would be even better to have a 10Gbps one. Ultimately the ideal would be a card capable of rivalling commercial traffic generators.
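
As a simple way to produce input for the generator, the Python sketch below writes a synthetic TCP trace to a pcap file using the scapy library, under a deliberately naive traffic model (fixed-size segments from a handful of assumed source addresses); a proper model would control packet sizes and inter-arrival times much more carefully.

    from scapy.all import Ether, IP, TCP, Raw, wrpcap
    import random

    def synthetic_flow(n_packets=1000, payload_size=1400):
        packets = []
        for _ in range(n_packets):
            src = f"10.0.0.{random.randint(1, 16)}"      # assumed address plan
            pkt = (Ether() /
                   IP(src=src, dst="10.0.1.1") /
                   TCP(sport=random.randint(1024, 65535), dport=80) /
                   Raw(b"\x00" * payload_size))
            packets.append(pkt)
        return packets

    # Write a pcap that the NetFPGA-based generator can replay at line rate.
    wrpcap("synthetic_tcp.pcap", synthetic_flow())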

Special Resources Required:

  • Two 1G NetFPGAs (during first term to test existing traffic generator)
  • Two 10GbE NetFPGAs
  • Three gigabytes of PWF storage space
  • Account on CUCL, NetFPGA, NetOS groups
  • Account on MPhil machines in SW02
  • Access to rooms SW02, SE18

Pre-requisites: Working knowledge of networking (e.g. the CompNet course), good marks in the Part IB HW tick, and an understanding of programming, including scripting (ideally Python)


Creating a 10Gb Packet Capture Card

Contact: Toby Moncaster

Measuring network traffic is crucial if one is ever to fully understand the performance of the network. Programs such as tcpdump allow you to capture your local traffic, but they run in user space and are unable to run at the line rate of modern high-speed Ethernet cards. Various solutions exist, but they usually require expensive add-on cards such as Endace's DAG cards. The problem is that at 10Gbps, a packet consisting only of TCP, IP and Ethernet headers occupies the wire for just 43ns; capturing data reliably at this rate is extremely hard.
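
The arithmetic behind that figure is worth making explicit; the short Python snippet below computes time on the wire for small packets at 10 Gbit/s (the header-only figure ignores the Ethernet preamble and inter-frame gap, which add roughly another 20 bytes per frame).

    LINE_RATE = 10e9   # bits per second

    def time_on_wire(frame_bytes):
        return frame_bytes * 8 / LINE_RATE

    # 14-byte Ethernet header + 20-byte IP header + 20-byte TCP header = 54 bytes.
    print(f"TCP/IP/Ethernet headers: {time_on_wire(54) * 1e9:.1f} ns")   # ~43 ns
    # Minimum-size Ethernet frame (64 bytes) for comparison.
    print(f"Minimum Ethernet frame:  {time_on_wire(64) * 1e9:.1f} ns")   # ~51 ns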

The NetFPGA platform offers a cheap alternative to specialist hardware such as DAG cards. NetFPGA is a fully programmable 4-port Ethernet card able to operate at either 1Gbps or 10Gbps. The basic traffic generator module for the NetFPGA 1G includes packet capture capabilities. However, these are only basic and involve sending the packets to the software stack for processing, which introduces timing inaccuracies and risks running into problems with the PCI bandwidth of the host machine.

Special Resources Required:

  • One 1G NetFPGA (during first term to test existing solutions)
  • Two 10GbE NetFPGAs
  • Three gigabytes of PWF storage space
  • Account on CUCL, NetFPGA, NetOS groups
  • Account on MPhil machines in SW02
  • Access to rooms SW02, SE18

Pre-requisites: Working knowledge of networking (e.g. the CompNet course), good marks in the Part IB HW tick, and an understanding of programming, including scripting (ideally Python)


More Systems Projects at the DTG Project Page

Contact: Rip Sohan

The DTG has a number of interesting systems projects in the following systems areas: