Future Stuff over the last 15 years

Our current group members, projects and interests are well described on the SRG web pages. Some other newer projects, which we are about to start or have just started, are also described in a bit more detail.

Here is a list of things we would like to pursue:

Building Communications Systems out of Society

Is the Internet, on balance, a good thing or a bad? Think spam and Snowden. Now moderate your answer.

Would the internet be different if just one of its founders had been a woman?

Why are groups of people more stupid than the stupidest person in the group?

We have now had several projects in this space, including Horizon, Social Nets, and Behaviour; plus we have some work tracking social content and relating it to user-generated content - e.g. crowdsourcing information that can be used for indexing, the evolution of item popularity, the reliability of recommendations/reputations, and using it to decide what to cache, what to evict, and when to add and remove resources in service centres (even energy management).

This was always the sort of thing that Haggle (== Ad Hoc Google) is about, but see this more radical idea for using Haggle to build a social autonomic immune system:

Xenopathology - Virtualising Epidemics

The fundamentally interesting thing about the Haggle project is that we are constructing communications networks out of society, whereas previously people have built communications networks (roads, canals, telephone, internet) and societies have emerged on top of them.

We have 3 papers on this, but they are under submission to blinded conferences, so you have to ask me for copies in person only :( Also, the Haggle software is described on SourceForge (in Java, and also re-done in .NET), and there was a talk about this at QMUL and Imperial which is briefly described at paperpicker and cs4fn.

As a follow-up to this, while we were discussing the use of epidemic and other bio-inspired models for data dissemination in Haggle, the following idea suggested itself.

If we were to pick a large population (China, richer parts of India, South America) that has cell phones, and run opportunistic communications software on the phones, we could write emulations of diseases. We could actually run virtual diseases across a population - assuming we have a model of a disease vector that requires a certain proximity and duration of contact (touch, breath, etc.), we can emulate that on Haggle nodes. We could then run large-scale experiments to investigate whether an epidemic would be supercritical or subcritical - see the work by Ganesh and Don Towsley at Microsoft on worms in the internet - viz Ganesh's work.

Meanwhile:

E.g. haemorrhagic fever is normally subcritical, as victims die too fast to infect others, whereas SARS was on the borderline - it required touch rather than breath, which made the vector a little too weak, but its infectiousness was high and the life of the victim was nearly long enough. Indeed, until it reached Canada, the model (vector and fatality) was not well known, and a near disaster was largely evaded through pure luck. Of interest to many epidemiologists now is "what if avian flu crosses over to humans - what happens?"

I talked to the head of epidemiology of the UK's Health Protection Agency (who happens to be my sister :) about this, and they said "damn right" - the models they use are naive, in the sense that SIR and SIS models assume homogeneous populations and mobility.
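To make the super/subcritical distinction concrete, here is a minimal homogeneous-mixing SIR sketch - exactly the kind of naive model described above, with illustrative (not fitted) parameters - showing that the outbreak takes off when R0 = beta/gamma > 1 and fizzles when R0 < 1:

```python
# Minimal homogeneous-mixing SIR model (Euler steps). Illustrates the
# epidemic threshold: R0 = beta/gamma > 1 -> supercritical outbreak,
# R0 < 1 -> subcritical fizzle. Parameters are illustrative, not fitted.

def sir_final_size(beta, gamma, s0=0.999, i0=0.001, dt=0.01, steps=100_000):
    """Fraction of the population ever infected (1 - final susceptibles)."""
    s, i = s0, i0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # mass-action infection term
        new_rec = gamma * i * dt      # recovery term
        s -= new_inf
        i += new_inf - new_rec
    return 1.0 - s

supercrit = sir_final_size(beta=0.5, gamma=0.25)  # R0 = 2.0: large outbreak
subcrit   = sir_final_size(beta=0.2, gamma=0.25)  # R0 = 0.8: dies out
print(f"R0=2.0 attack rate: {supercrit:.2f}; R0=0.8 attack rate: {subcrit:.3f}")
```

The point of the Haggle data is precisely that real contact patterns violate this model's mixing assumption.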

Data and analysis from even the early Haggle experiments include all the heterogeneity we could wish for.

This seems like something we have the skills for in the UK (e.g. modelling and experiments), and I might do this as my sabbatical. There are two things one can do with this:

1/ confirm epidemic patterns of real diseases by confirming contact patterns via Haggle (e.g. on phones or iMotes) - the HPA said they'd like to do an initial experiment with schoolchildren and iMotes, as there are LOTs of epidemics of mild diseases in schools, and iMotes are easier to give to young kids than phones - one thing they'd like to do is analyse "mix zones" (where people belong to multiple cliques, in the terms of the paper above).

2/ the artificial/emulated disease (gedanken experiment) is very useful to "try out" what might happen for different versions of (say) a human crossover of avian flu, where the infectiousness (and fatality) models are not yet known; one could then analyse results for various scenarios and propose various contingency prophylactic measures - typically, quarantine measures are crude, but if it turns out a disease has a very "hub" model, one could isolate fewer, but more "promiscuous", groups/cliques first - [See also work by Ross Anderson in this area - viz http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-637.html which discusses the negative side of "hubby" networks].
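A toy simulation makes the hub-isolation argument vivid. The contact graph, parameters, and transmission model below are all invented for illustration (a bond-percolation BFS, not a real epidemiological model), but they show the qualitative effect: quarantining only the few promiscuous hubs collapses the mean outbreak size.

```python
import random
from collections import deque

random.seed(42)

# Toy contact graph: a few highly connected "hubs" plus sparse background
# contacts. All numbers are illustrative, not fitted to any real disease.
N, HUBS = 2000, 10
adj = {v: set() for v in range(N)}

def link(a, b):
    adj[a].add(b)
    adj[b].add(a)

for h in range(HUBS):                           # each hub contacts 150 others
    for v in random.sample(range(HUBS, N), 150):
        link(h, v)
for _ in range(N):                              # sparse background contacts
    a, b = random.sample(range(N), 2)
    link(a, b)

def outbreak(removed, p=0.3, trials=300):
    """Mean outbreak size: BFS from a random seed, where each contact
    transmits independently with probability p (bond percolation)."""
    candidates = [v for v in range(N) if v not in removed]
    total = 0
    for _ in range(trials):
        seed = random.choice(candidates)
        infected, q = {seed}, deque([seed])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in removed and v not in infected and random.random() < p:
                    infected.add(v)
                    q.append(v)
        total += len(infected)
    return total / trials

avg_open     = outbreak(removed=set())
avg_isolated = outbreak(removed=set(range(HUBS)))  # quarantine the hubs only
print(f"mean outbreak: {avg_open:.0f} open vs {avg_isolated:.0f} hubs isolated")
```

Removing 10 nodes out of 2000 pushes the process from supercritical to subcritical - the "isolate the promiscuous first" point in miniature.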

We have a lot of analysis of early Haggle experimental data to show this - basically there are people you preferentially use to forward; these are the same people you would preferentially isolate to cut a disease/epidemic down from being supercritical (see the Infocom paper on Haggle too) to collapsing before it goes pandemic.

There are also non-linearities that might be present in some diseases (positive feedback may be present - not sure - need to talk to some medics here), and you really, really want to cut those down quickly.

Un-stitching Networks

The more I think about it, the more I am persuaded that the net is not a graph, and that we don't want packets anymore. A net is a set of relationships, and there are various periods over which some members of those relationships communicate.

Ideas like packet switching (versus circuit switching, versus cell switching) get in the way of thinking in innovative ways about how a set of parties might communicate. Nowhere is this more obvious than in multihop multi-channel radio systems, where MIMO, distributed antennae and coding, CDMA and network coding (even for unicast, but also for many-to-many) make the model of a net as a graph fairly wrong - the idea of interference as a physical spatial phenomenon completely undermines the idea of a collection of point-to-point links.

Now what if we apply the same ideas in all-optical nets too? Why not? (Some pieces of the optical net might be free space, but one can also re-design the entire notion of a switch if one gets away from the idea of something that moves specific bits from specific input ports to specific output ports.) One might gain very large advantages in resilience in a wide-area optical net by redundant network coding of traffic from many to many, and the advantage of not having to "re-start" packets or circuits when there are "link" outages would easily offset the disadvantage of a drop in utilisation due to the redundancy overhead. This is increasingly important in today's applications.
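The classic "butterfly" example captures why coding beats forwarding. Two sources share one bottleneck link; forwarding alone can carry only one of the two packets across it per use, but sending the XOR lets both sinks recover both packets. A minimal sketch, with packets modelled as byte strings and names invented for illustration:

```python
# Butterfly network coding in miniature: sink 1 hears packet a directly,
# sink 2 hears packet b directly, and the single bottleneck link carries
# the XOR a ^ b, from which each sink recovers the packet it is missing.

def xor(p, q):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(p, q))

pkt_a = b"pkt-from-src-A--"   # known at sink 1, wanted by sink 2
pkt_b = b"pkt-from-src-B--"   # known at sink 2, wanted by sink 1

coded = xor(pkt_a, pkt_b)     # one use of the bottleneck serves both sinks

sink1_gets = xor(coded, pkt_a)   # sink 1 cancels what it already knows
sink2_gets = xor(coded, pkt_b)   # likewise at sink 2
print(sink1_gets, sink2_gets)
```

The same cancellation idea, generalised to random linear combinations, is what buys the resilience claimed above: any sufficiently many coded packets suffice, so individual "link" outages need no restart.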

At the same time, the notion of the timescale of a session, and bursts within a session, being either nailed down as a sequence of packets, or as a "circuit" (virtual, lambda, or otherwise) seems to me to be too limited. We need something that does in-band description of flows that can be aggregated over time (and space - see network coding above) so that decisions on what to do at the next processing stages (intermediate nodes) can be made - but the same formats could be used to describe a flow, a circuit, a packet, an aggregate, and so on, and could include direct representation of information such as virtual provider and virtual end system, which could be progressively revealed (or masked) through proper use of encryption as such meta-data in the format progresses through the network.

I guess the idea is to generalise things to some sort of wide-area space-time code-division multiplex, if one must be reductionist :)

A side effect of this (or is it a sideband?) is that we want networks that are all photons - no more electrons - they are too small and there is always the risk of shocks.

Scaling the Internet along all dimensions - slightly older interests

The more I read about it, the more I think we can build the net out of a mesh of clusters connected directly by fibre, and a multihop radio access network. No routers. All network coded cosily together.

Scaling is a woolly term bandied about by networking people (mea culpa) - what we mean is "complexity".

We have a system (e.g. the Internet) which has components (hosts, links, routers) and modules (protocols) and is growing, like many computing phenomena, exponentially. We'd like the complexity of the components to stay O(1), or at least to make economic sense, somewhere near or better than O(n) for n components or modules...

This might make quite a nice book or course.


The most interesting work here is on Highly Optimised Tolerance - see especially John Doyle's work. How do we avoid breaking it?


Scale-free (small-world) networks - the original power-law observation was by the three Faloutsos brothers; since then, the idea has been observed in Web connectivity, in DDoS defences, in IP routing evolution, etc. - a very good place to start is Don Towsley's course on this at UMass. David Reed has some nice observations on Internet growth and Group Forming Networks. Can we use this prescriptively for protocol and system design?

Based on this, we are investigating coordinate systems where we embed the measurement of distances in a hyperbolic rather than Euclidean space. This leads to a much better fit when estimating distances to third parties. We'd like to use this in P2P, overlay routes, CDNs, Xenoserver node location (e.g. the nearest node to place a game server, the furthest place to put a disaster-recovery backup copy, etc.), and of course in hybrid geographic-topological ad hoc mobile routing algorithms.
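A quick sketch of why the hyperbolic fit is plausible: in the Poincaré disk model (a standard closed form; the coordinates below are invented examples), points near the rim are exponentially far apart in hyperbolic terms even when they are close in Euclidean terms - mirroring an internet whose shortest paths between "periphery" nodes funnel through a small core.

```python
import math

def euclid(u, v):
    return math.dist(u, v)

def hyper(u, v):
    """Distance in the Poincare disk model of the hyperbolic plane.
    Points are 2D tuples with norm < 1; standard closed-form expression."""
    d2 = math.dist(u, v) ** 2
    den = (1 - math.dist(u, (0, 0)) ** 2) * (1 - math.dist(v, (0, 0)) ** 2)
    return math.acosh(1 + 2 * d2 / den)

core, edge1, edge2 = (0.0, 0.0), (0.99, 0.0), (0.0, 0.99)
# Two "periphery" nodes near the rim: modest Euclidean separation, but a
# hyperbolic distance exceeding even their distance to the core - the
# tree-like, go-via-the-core geometry that fits measured internet delays.
print(euclid(edge1, edge2), hyper(edge1, edge2), hyper(core, edge1))
```

The embedding task is then to choose disk coordinates so that `hyper(u, v)` predicts measured latency between u and v.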


A recent cool visualisation of the internet, by one of my 1st-year tutees from last year, cites lots of other useful large-scale measurement initiatives - the CSTB recommends that sometime soon we try to capture _everything_ about the Internet for a whole day, as a Grand Challenge.


The Internet currently runs on adaptation - the so-called "Padhye equation" describes the principal behaviour of TCP; other (smoother) schemes are in the literature. Adapt or die (Neal Stephenson...), but how fast? Adaptively?
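For the record, the Padhye (PFTK) equation gives steady-state TCP throughput as a function of loss rate and RTT. The sketch below implements its usual textbook form; the parameter values are illustrative defaults, not measurements.

```python
import math

def tcp_throughput(p, rtt, t0=1.0, b=2, wmax=64, mss=1460):
    """Approximate steady-state TCP bulk throughput (bytes/sec) via the
    PFTK ("Padhye") equation: loss event rate p, round-trip time rtt (s),
    retransmission timeout t0 (s), b packets acked per ACK, and a maximum
    window of wmax packets. Default values are illustrative only."""
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8))
                  * p * (1 + 32 * p ** 2))
    # Throughput in packets/sec, capped by the receiver window, then bytes.
    return mss * min(wmax / rtt, 1.0 / denom)

for p in (0.001, 0.01, 0.1):
    print(f"loss {p}: {tcp_throughput(p, rtt=0.1):,.0f} B/s")
```

The inverse-sqrt(p) shape of the first term is the familiar "TCP-friendly" rate that smoother schemes are designed to match on average.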


Parekh first described the delay bound for a guaranteed service implemented via GPS (approximated via WFQ) in a switch/router, for variable-length, leaky-bucket-constrained flows. Better bounds and better implementations abound. Signalling is the missing piece... What about probabilistic delay - isn't that going to give us a better 95th centile (see above under adaptation)?
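One common statement of the Parekh-Gallager end-to-end bound, computed below as a sketch (the topology and numbers are invented; we also reuse the flow's maximum packet size as the network-wide maximum for simplicity):

```python
def pg_delay_bound(sigma, g, K, L, link_caps):
    """End-to-end delay bound (seconds) for a (sigma, rho) leaky-bucket
    flow given a guaranteed rate g >= rho at each of K WFQ (PGPS) hops:
        D <= sigma/g + (K - 1) * L / g + sum(L / C_m over the K hops)
    where L is the maximum packet size (bits) and C_m the capacity (bit/s)
    of hop m. Illustrative textbook form, not a verified design."""
    assert len(link_caps) == K and g > 0
    return sigma / g + (K - 1) * L / g + sum(L / c for c in link_caps)

# e.g. a 16 KB burst, 1 Mbit/s guaranteed rate, 1500-byte packets, 3 hops:
bound = pg_delay_bound(sigma=16_000 * 8, g=1_000_000, K=3,
                       L=1500 * 8, link_caps=[622e6, 2.5e9, 622e6])
print(f"worst-case delay bound: {bound * 1000:.1f} ms")
```

Note that the bound is dominated by sigma/g: doubling the guaranteed rate roughly halves it, which is exactly the knob the missing signalling would have to negotiate.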

Signal and Control

Control planes had better be flexible or they won't compete with IP... but what about programming languages for such systems (active control planes) - what is a) useful, b) safe, and c) teachable?

Internet Scale Economics

Pricing the internet, services, content, etc. needs attention.

Scalable Security

Revenue from content (and networks) is under daily threat from DDoS, copying without permission, and intrusion. It behoves us to tackle this in ways that will work on a global scale - novel immunological approaches seem more promising, offering highly adaptive defence rather than "cast iron" approaches with catastrophic failures - a brief glance at the Code Red analysis might support an implied biological imperative. Where are the IP T-cells? Recently we have built Honeycomb, and my friends next door have a pretty neat end-system-only virus containment system... watch this space.

Content is King, King is content

Freenet is a neat idea, building on Napster and Gnutella. But it's not searchable in any efficient manner. Combining the power of p2p with the power of Google seems like an interesting challenge. Now make it mobile. Now make it secure. Now go back and make sure you didn't lose privacy.


Huston, we have a problem with route aggregation - why? Because (allegedly) customers want faster failover and some modicum of traffic engineering - both of these put stress on the path-vector policy routing infrastructure (dare I use the word) that is BGP - what we need is routing research to address the problem properly. Why have convergence? Why be globally consistent? A good-enough route will do... How do I compute this? How do I persuade providers and subscribers that good enough IS good enough (c.f. random trunk routing took a decade to be believed)?

It seems obvious that we need some new paradigm that includes policy requirements and traffic conditions, but still scales - this involves a new separation of timescale considerations (e.g. deployed infrastructure changes slowly, policy a little faster, status more quickly, traffic faster still, and finally packets and bits flow...). Control, including information hiding, must be managed properly, not as a side effect of obscurity.

Group Communications

I've been trying to solve this one since 1988. What if we revisit hierarchical PIM, and hash content groups onto the hierarchy (c.f. the NGC paper by the ACIRI folks and the unpublished UCL HPIM spec)? Given the amount of radio out there, multicast seems quite useful - especially in emergencies (e.g. early warnings).

Mobility and non-pervasiveness - What is the Radio Button?

People talk about pervasive computing - what I am impressed by is the non-pervasiveness of RF and other wireless networks - we need to get back to the Coda way of thinking... South of the equator, rsyncing inconsistent systems (e.g. via patch heuristics and occasional human intervention) might be interesting...

The big problem with the likely case is that it is NOT infrastructure-free ("pure" MANET), nor is it just global WiFi or GSM/3G wireless access to the "core" wireline Internet. In reality, a lot of nodes are going to be 2 or 3 hops from "stuff". These hops will be unreliable, intermittent, and heterogeneous in the extreme (Bluetooth, infrared, 802.11, WiMax, GPRS, etc.) - using the diversity of these links is a bonus, not a burden, as they have uncorrelated problems. However, the network architecture to use intermittent connectivity is not something we have clearly elucidated yet. In Haggle (with Intel et al.), we're looking at this - it's a case of Delay Tolerant Networking, but we're taking a bottom-up approach. We did some prior work on commuter routers (c.f. MAR) as well as handovers for nets (i.e. handover for a bus/train/plane-load of laptops) rather than hosts. Deal with the corner cases and maybe the straight edges will take care of themselves.

As part of this, we are looking at mobility models (people walk and drive on funny routes, and do not have random destinations or velocities), and we would love to de-mystify RF propagation - a pervasive computing system could probably _document_ the RF map of the world and publish it (continuously).

The Need for Speed

The core internet components now stress electronics, and are moving to optical - some might say this moves us out of Computing into EE and physics. I'd like to move optical back into computing via "Turing Switching" (more later).

The edge is moving from PSTN to wireless. This is so slow as to stress protocols - information theory is of great benefit here in optimising these protocols to work well in barren environments.

Kwiditch (name changed to avoid (TM))

I'd like to get hold of about 35 Kyosho model radio-controlled helicopters. The goal is to build a system (keyword) that is controlled by a set of humans on the ground. The interface should be as natural as possible (gesture/movement based). The aim is to provide remote control together with on-board control systems for stability, so that players can play an interesting 3D flying game - the recent helium balloon seen in the Street might serve, for example, as a ball.

There are some nice ad hoc radio problems in this, since most of the radio control systems have insufficient channels. Also, it might be neat to design automatic collision avoidance. Etc., etc.

Most recently I was contacted by people interested in Kite Organs (see Google for some possibly fascinating hits on these), but more specifically, they pointed out that Stockhausen once wrote a string quartet for helicopters, which was once performed by the Arditti Quartet, conducted by Boulez, I believe!

Future Internet Manifesto

the past is ever present