The Systems Research Group
Networks and Operating Systems
Part II/Diploma Project Suggestions, 1998-1999
Physically Distributed Games
This project involves the design and implementation of a distributed game
which can support widely dispersed players. It could investigate the trade-offs
that exist between various aspects of the game:
Consistency -- the extent to which each player sees the game progressing
in the same way.
Resilience -- for example to packet loss or network partitioning.
Security -- perhaps against technically competent cheating players.
A specific example of an interesting area is the extent to which consistency
can be sacrificed when players cannot see one another's screens. In this
situation it is only necessary for the clients to agree on certain key
facts -- for example whether a shot fired by one player will hit another.
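As a concrete sketch of agreeing on key facts only (written in Python; the
function and geometry are invented for illustration), both clients can
evaluate the same deterministic hit test over the same exchanged inputs, so
only those inputs need to be kept consistent, not the whole display:

  # Sketch: clients agree on "did the shot hit?" by running the same
  # deterministic check over the same exchanged inputs (positions at a
  # given tick), so only those inputs need to be kept consistent.

  def shot_hits(shooter_pos, aim_dir, target_pos, target_radius):
      """Does a ray from shooter_pos along unit vector aim_dir pass
      within target_radius of target_pos?"""
      sx, sy = shooter_pos
      dx, dy = aim_dir
      tx, ty = target_pos
      # Project the target onto the ray; clamp behind-the-shooter cases.
      t = max(0.0, (tx - sx) * dx + (ty - sy) * dy)
      cx, cy = sx + t * dx, sy + t * dy       # closest point on the ray
      return (tx - cx) ** 2 + (ty - cy) ** 2 <= target_radius ** 2

  # Both clients evaluate the same call with the agreed tick inputs,
  # so they reach the same verdict without seeing each other's screens.
  print(shot_hits((0.0, 0.0), (1.0, 0.0), (5.0, 0.2), 0.5))   # True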
Pre-fetching Web Cache
Whilst there are many web caches out in the big, wide world, they all rely
on data that has already been accessed being accessed again. It would be
nice if, rather than caching old data in this manner, one could have a
cache which ``pre-fetched'' data, in a manner similar to many hard disk
controllers. For example, when one visited a web page, it might be worth
beginning to download the pages linked to by that page, so that the ``idle''
time whilst the user is digesting the data is not wasted. In practice
one would have to be more clever than this, since some links may point
at dead or highly time-sensitive pages, or there may simply be too many
links to download them all in the background (from a network traffic
point of view). Similarly, the interaction with the browser's cache (in
terms of some of a child page's links pointing to the same data as the
parent's links) could be optimized. This type of cache could be implemented
in either the client or the server.
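A minimal sketch of the idea, in Python using only standard library modules
(the fetch limit and cache policy are assumptions, not a design): after
serving a page, collect its links and fetch a bounded number of uncached
ones in the background:

  # Sketch of a pre-fetching cache: after a page is fetched, pull a
  # bounded number of its links into the cache in the background.
  import threading
  from html.parser import HTMLParser
  from urllib.parse import urljoin
  from urllib.request import urlopen

  cache = {}              # url -> body (a real cache would expire entries)
  PREFETCH_LIMIT = 5      # cap the extra background traffic

  class LinkCollector(HTMLParser):
      def __init__(self):
          super().__init__()
          self.links = []
      def handle_starttag(self, tag, attrs):
          if tag == "a":
              href = dict(attrs).get("href")
              if href:
                  self.links.append(href)

  def fetch(url):
      if url not in cache:
          try:
              cache[url] = urlopen(url, timeout=5).read()
          except OSError:
              pass        # dead link: just skip it
      return cache.get(url)

  def fetch_and_prefetch(url):
      body = fetch(url)
      if body is None:
          return None
      parser = LinkCollector()
      parser.feed(body.decode("latin-1", "replace"))
      # Prefetch a few not-yet-cached child links while the user reads.
      targets = [urljoin(url, h) for h in parser.links]
      targets = [t for t in targets if t.startswith("http") and t not in cache]
      for child in targets[:PREFETCH_LIMIT]:
          threading.Thread(target=fetch, args=(child,), daemon=True).start()
      return body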
QoS Firewall
The purpose of an IP firewall is to restrict the flow of traffic between
two networks according to some policy. Traditionally this has meant blocking
traffic destined for certain IP addresses or UDP/TCP port ranges. With the
advent of charging on a "per byte" basis, there is demand for a firewall-like
device that can not only block certain streams entirely, but
also limit the throughput of other streams according to some policy (as
sketched below). This sort of device could also be used to provide limited
differentiated service between "IP flows" on a network. Consider the case
where a small business wishes IP traffic from its web server to have priority
over Quake traffic from company employees. In the first instance this project
would involve building a QoS Firewall using a commodity PC with one Ethernet
card, routing to an overlaid IP subnet. Linux already has many of these
capabilities, but it would be interesting to build the system using cheap
hardware, such as a "Shark" network computer. The project can then diverge
to include:
UI design. A Web-based management interface?
QoS Bridge. Avoid external routing updates when this box is introduced.
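One plausible mechanism for the throughput-limiting half of the policy is a
token bucket per flow; the sketch below, in Python, uses an invented flow
table and made-up rates:

  # Sketch: per-flow token buckets enforce a throughput policy -- a
  # packet is forwarded only if its flow has accumulated enough tokens.
  import time

  class TokenBucket:
      def __init__(self, rate_bytes_per_s, burst_bytes):
          self.rate = rate_bytes_per_s
          self.burst = burst_bytes
          self.tokens = burst_bytes
          self.last = time.monotonic()

      def allow(self, packet_len):
          now = time.monotonic()
          self.tokens = min(self.burst,
                            self.tokens + (now - self.last) * self.rate)
          self.last = now
          if self.tokens >= packet_len:
              self.tokens -= packet_len
              return True        # forward the packet
          return False           # drop (or queue) it

  # Hypothetical policy: web-server traffic gets 10x the Quake rate.
  policy = {
      ("webserver", 80):  TokenBucket(rate_bytes_per_s=100_000,
                                      burst_bytes=20_000),
      ("quake", 27960):   TokenBucket(rate_bytes_per_s=10_000,
                                      burst_bytes=2_000),
  }

  def classify_and_filter(flow_key, packet_len):
      bucket = policy.get(flow_key)
      return bucket.allow(packet_len) if bucket else True  # default: pass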
QoS Aware Web Server
Much work has been carried out in the Computer Laboratory and elsewhere
to provide service guarantees both in the network and in the operating
system (Nemesis, Rialto, Resource Kernels...) for CPU and disk. It would
be interesting to develop a web server that could take advantage of these
guarantees to provide guaranteed levels of service to visitors. As a simple
example, consider a commercial web site, such as www.cnn.com, that is visited
by many people, some of whom would like to pay for preferential service.
This would be a very hard project with many open decisions. If you know
what RTSP is, then this project is for you.
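One way such a server might spend its guarantees, sketched in Python (the
two service classes and their ordering are invented for illustration), is
to serve paying visitors from a strict-priority request queue:

  # Sketch: requests from paying visitors are dequeued ahead of others,
  # so whatever CPU/disk guarantee the server holds is spent on them first.
  import heapq
  import itertools

  PRIORITY = {"premium": 0, "standard": 1}  # lower number = served first
  _seq = itertools.count()                  # FIFO tie-break within a class
  queue = []

  def enqueue(request, service_class):
      heapq.heappush(queue, (PRIORITY[service_class], next(_seq), request))

  def next_request():
      return heapq.heappop(queue)[2] if queue else None

  enqueue("GET /headlines", "standard")
  enqueue("GET /headlines", "premium")
  print(next_request())   # the premium request is served first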
Adaptive Web Serving
When you visit many web sites, the first thing that they ask is "Do you
have a high speed or low speed connection to the Internet?". This is fine
if we assume the bottleneck is close to the recipient, and he is clued
up enough to know what it is, but what if the bottleneck is somewhere in
the middle? A solution is to put more "smarts" into the web server. It
would be possible to construct a web server that "learns" how good the
connection to each browser is, and can adapt content accordingly. This
project would involve the development of a web server, and a description
language for the adaptation. Advanced work could go on to integrate layered
image schemes into the system, to reduce the amount of storage required on
the server and to do away with the need to keep multiple copies of all
pages/images.
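A minimal sketch of the "learning" step, in Python (the smoothing factor,
threshold, and variant names are assumptions): estimate each browser's
bandwidth from observed transfer times, and pick a content variant
accordingly:

  # Sketch: keep a smoothed per-client bandwidth estimate from observed
  # transfers, and choose between heavy and light page variants with it.
  estimates = {}          # client address -> smoothed bytes/sec
  ALPHA = 0.3             # weight given to the newest measurement
  THRESHOLD = 64_000      # assumed cut-off between "fast" and "slow"

  def record_transfer(client, nbytes, seconds):
      sample = nbytes / seconds
      old = estimates.get(client, sample)
      estimates[client] = (1 - ALPHA) * old + ALPHA * sample

  def choose_variant(client):
      bw = estimates.get(client)
      if bw is None:
          return "page-light.html"        # unknown client: be conservative
      return "page-full.html" if bw >= THRESHOLD else "page-light.html"

  record_transfer("10.0.0.7", nbytes=120_000, seconds=0.8)  # ~150 kB/s
  print(choose_variant("10.0.0.7"))                         # page-full.html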
Proving Correctness of
A prototype implementation for an ML-like language:
fn f => fn x => (fn y => (fn x => 0) (f x)) = (fn x => 0);
A common problem with today's software is that its increasing complexity
is not well served by `legacy' languages, such as `C', which have no support
for garbage collection, and so on. One solution to the resulting
instability of software written in languages like `C' would be to write
a checking pass for a compiler such as gcc: it would take the output of
the front-end parser, which has already been checked for syntactic
correctness, and check it for semantic correctness. This would involve
quite a large amount of language and compiler work, but should prove
interesting. The project could be extended to other languages, such as
Pascal, Modula-3, and Java.
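As a toy illustration of what a semantic check over parser output might
look like (in Python, over an invented tuple-based AST rather than gcc's
real intermediate form), consider a pass that flags variables read before
they are assigned:

  # Toy semantic check over a front-end's AST: flag any variable that
  # is read before it has been assigned. The tuple-based AST is made up.
  def check_uses(stmts):
      assigned, errors = set(), []
      for op, *args in stmts:
          if op == "assign":                 # ("assign", name, expr_vars)
              name, expr_vars = args
              errors += [v for v in expr_vars if v not in assigned]
              assigned.add(name)
          elif op == "use":                  # ("use", expr_vars)
              errors += [v for v in args[0] if v not in assigned]
      return errors

  prog = [("assign", "x", []), ("assign", "y", ["x", "z"]), ("use", ["y"])]
  print(check_uses(prog))    # ['z'] -- z is read before assignment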
Project enquiries to:
Austin Donnelly, Cambridge Computer Laboratory, Austin.Donnelly@cl.cam.ac.uk
Stephen Early, Cambridge Computer Laboratory, Stephen.Early@cl.cam.ac.uk
Dickon Reed, Cambridge Computer Laboratory, Dickon.Reed@cl.cam.ac.uk
HTML gripes to:
Richard Mortier, Cambridge Computer Laboratory, Richard.Mortier@cl.cam.ac.uk
Last updated: 1998/09/14 20:25:50