Sir Maurice Wilkes
It is clear that we are within sight of the point at which further shrinkage of CMOS circuits will become
impossible for physical reasons. This point will be signaled by sharply increasing costs.
I shall comment on how Moore's law is seen by the semiconductor industry, which is responsible for the
publication of the International Technology Roadmap for Semiconductors, and on the problems that will
eventually bring to an end the regular doubling of the number of components that can be put on a chip.
Further advances will surely take place, but it is hard to figure out what they will be or when they will
take place. I shall comment on the outlook as I see it, but I lay no claim to special insights.
Internet congestion control can be viewed as a resource allocation mechanism implemented via a distributed
computation by end-systems. It provides a concrete, measurable example of an economic tatonnement process,
with TCP's congestion avoidance algorithms playing the role of a "Walrasian auctioneer" searching for market
clearing allocations. What are the theoretical limitations upon what can be achieved by such a distributed computation?
In this talk we review recent progress, and discuss the constraints imposed by: limited information available
to end-systems; the finite speed of light; inherent randomness; and incentive compatibility.
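The analogy can be made concrete with a toy simulation (all names and parameter values below are illustrative assumptions, not from the talk): AIMD end-systems back off whenever the link signals congestion, and their rates converge toward an equal share of capacity regardless of where they start.

```python
# Toy sketch of AIMD congestion avoidance as a tatonnement process.
# The link's congestion signal plays the "auctioneer": it announces
# a higher price (a loss/mark) whenever demand exceeds capacity, and
# the end-systems adjust their rates in response.

CAPACITY = 100.0   # link capacity (packets per RTT) -- illustrative
ALPHA = 1.0        # additive increase per RTT
BETA = 0.5         # multiplicative decrease factor on congestion

def aimd_round(rates):
    """One synchronous RTT: a congested link signals every flow."""
    if sum(rates) > CAPACITY:
        return [r * BETA for r in rates]   # back off (price rose)
    return [r + ALPHA for r in rates]      # probe for spare bandwidth

rates = [5.0, 40.0, 80.0]   # flows start far from a fair share
for _ in range(500):
    rates = aimd_round(rates)

# After many rounds the rates oscillate around an equal share of the
# capacity: increases preserve rate differences, while each decrease
# halves them, so the initial imbalance decays away.
print([round(r, 1) for r in rates])
```

This is the classic Chiu–Jain argument for why additive increase with multiplicative decrease drives flows toward fairness; the talk's questions concern what such distributed adjustment can and cannot achieve under delay, randomness, and strategic behaviour.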
A major aim of giving semantics to imperative and object-oriented programming languages
in recent years has been to support the information hiding that goes on in such languages,
in particular via relational reasoning principles for showing the correctness of data representations
(also referred to as "data refinement"). The situation for programs with local variables is by now
well understood, but a similar semantics for programs that deal with heap data structures and pointers
has proved elusive. In this talk, we report on a first semantic model for languages with heap data structures
that validates data refinement. The model is built using a general framework for information hiding
based on relational parametricity. The novelty is in finding appropriate relational correspondences that account
for information leakage or "extrusion" that becomes possible with pointer data.
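A small illustrative sketch (hypothetical code, not from the talk) of the kind of pointer "extrusion" at issue: two counter representations are observationally equivalent until one of them leaks an internal heap cell, at which point a client can mutate through the leaked pointer, tell the representations apart, and break the data refinement.

```python
# Two representations of a counter. Without leak(), every client
# observes identical behaviour from both, so either may refine the
# other. Leaking the internal pointer destroys that equivalence.

class CounterA:
    """Counter stored as a one-element list holding the count."""
    def __init__(self):
        self._cells = [0]
    def inc(self):
        self._cells[0] += 1
    def value(self):
        return self._cells[0]
    def leak(self):
        # Extrusion: hands the internal heap cell to the client.
        return self._cells

class CounterB:
    """Counter stored as a list of ticks; the value is its length."""
    def __init__(self):
        self._cells = []
    def inc(self):
        self._cells.append(None)
    def value(self):
        return len(self._cells)
    def leak(self):
        return self._cells

def client(counter):
    counter.inc()
    counter.inc()
    cells = counter.leak()
    cells.append(None)        # mutates through the leaked pointer
    return counter.value()

# The leaked pointer makes the representations distinguishable:
# appending to A's cell list leaves its count at 2, while appending
# to B's tick list pushes its count to 3.
print(client(CounterA()), client(CounterB()))
```

A semantic model that validates data refinement must therefore track exactly which heap cells have been extruded to the client, which is where the relational correspondences mentioned above come in.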
The mass adoption of embedded computing devices (mobile phones, PDAs, smartcards, etc.) is moving us rapidly into
the ubiquitous computing age. If these devices are to be a boon rather than a bane then robustness is critical.
Security will be increasingly important, not only for traditional roles like payment mechanisms and access control,
but also for peer to peer transactions and new business structures.
Smartcards are an early embodiment of consumer security devices. They present a harder target for the criminal underworld
than their magnetic-stripe counterparts. However, for several years now it has been known that microprocessors can leak a
lot of useful information through power and electromagnetic emissions. These emissions (often referred to as "side channels")
are characteristic of conventional clocked digital circuit designs. Fault injection techniques have also been used to trick
devices into fault modes which leak additional information.
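The power side channel can be illustrated with a simplified simulation (the leakage model and all parameters here are assumptions for illustration, not the G3Card measurements): dynamic power in clocked CMOS roughly tracks how many bus lines toggle, i.e. the Hamming weight of the data being processed, so correlating guessed intermediate values against power samples recovers a secret key byte.

```python
# Simplified correlation power analysis against a Hamming-weight
# leakage model. The "traces" are simulated: leakage of the key-mixed
# value plus Gaussian noise, standing in for oscilloscope samples.
import random

def hw(x):
    """Hamming weight of a byte."""
    return bin(x).count("1")

SECRET = 0x5A
random.seed(1)
plaintexts = [random.randrange(256) for _ in range(500)]
traces = [hw(p ^ SECRET) + random.gauss(0, 0.3) for p in plaintexts]

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# The key guess whose predicted Hamming weights best match the traces
# is, given enough samples, the real key byte.
best = max(range(256),
           key=lambda k: correlation([hw(p ^ k) for p in plaintexts],
                                     traces))
print(hex(best))   # recovers the secret key byte
```

Countermeasures of the kind the abstract alludes to, such as self-timed and dual-rail logic, aim to make the power drawn per operation independent of the data, removing the correlation this attack exploits.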
As part of an EU-funded project (G3Card) we have been collaborating with industrial and academic partners to develop
technologies for the 3rd Generation of Smartcards. In Cambridge we have played both black-hat and white-hat roles so
that we can evaluate what we have designed, in much the same way that a good locksmith must also understand
lock-picking. This lecture will review our design strategies, from concept to VLSI implementation. Results will be
presented, ranging from formal verification of components to bench experiments on naked chips.
Brendan Murphy Award Lecture
We address the problem of how to handle exceptions in distributed object systems. In a
distributed computing environment exceptions may be raised simultaneously in different
processing nodes and need to be treated in a coordinated manner. Mishandling concurrent
exceptions can lead to catastrophic consequences. We take two kinds of concurrency into
account, arising from: (i) several objects that are designed collectively and invoked
concurrently to achieve a global goal, and (ii) multiple objects (or object groups) that are
designed independently and compete for the same system resources. A new distributed algorithm
for resolving concurrent exceptions has been developed, and several case studies undertaken.
These include one that involved the design, implementation and validation of a control system
for the "Fault-Tolerant Production Cell", a safety-critical system design challenge posed by
FZI (Forschungszentrum Informatik).
Joint work with Jie Xu and Alexander Romanovsky.
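One classical way to resolve concurrent exceptions, sketched here as a hypothetical reconstruction rather than the talk's actual algorithm, is to arrange the application's exceptions in a tree and have the cooperating nodes agree on the root of the smallest subtree that covers everything raised, so that all participants handle one "resolved" exception.

```python
# Exception resolution over a tree of application exceptions.
# All exception names below are invented for illustration;
# "Universal" is the root that covers everything.

PARENT = {
    "SensorFault": "HardwareError",
    "ActuatorFault": "HardwareError",
    "HardwareError": "Universal",
    "Timeout": "Universal",
}

def ancestors(exc):
    """Path from exc up to the root, inclusive."""
    path = [exc]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def resolve(raised):
    """Root of the smallest subtree containing every raised exception."""
    common = set(ancestors(raised[0]))
    for exc in raised[1:]:
        common &= set(ancestors(exc))
    # The lowest common ancestor is the first common node on any path.
    return next(a for a in ancestors(raised[0]) if a in common)

# Two hardware faults raised in different nodes resolve to their
# common parent; unrelated exceptions resolve all the way to the root.
print(resolve(["SensorFault", "ActuatorFault"]))   # HardwareError
print(resolve(["SensorFault", "Timeout"]))         # Universal
```

In a distributed setting the hard part, which this sketch omits, is gathering the concurrently raised exceptions from all nodes consistently before resolution; that coordination is what a distributed resolution algorithm must provide.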
In a recent paper, Faloutsos et al. found that the inter-Autonomous System (AS) topology exhibits
a power-law vertex degree distribution. This result was quite unexpected in the networking community and
stirred significant interest in exploring the possible causes of this phenomenon. The work of Barabasi
and Albert and its application to network topology generation in the work of Medina et al. have
explored a promising class of models that yield strict power-law vertex degree distributions. In
this talk I will describe our efforts to re-examine the BGP measurements that form the basis for
the results reported in the paper by Faloutsos et al. We find that, by its very nature (i.e., being
strictly BGP-based), the data provides a very incomplete picture of Internet connectivity at the
AS level. I will also describe our attempts to construct and validate a more complete AS-level topology.
The AS peering relationship maps constructed from NLANR data typically miss 20-50% or even more of the
relationships found in our more complete AS maps. Subsequently, we find that while the vertex degree
distributions resulting from the extended maps are heavy-tailed, they deviate significantly from a strict power
law. Finally, we show that available historical data does not support the connectivity-based dynamics assumed
in the paper by Barabasi and Albert. Together, our results suggest that the Internet topology
at the AS level may well have developed over time following a very different set of growth processes than
those proposed by Barabasi and Albert.
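One simple way to make "deviates from a strict power law" quantitative (an assumed illustration, not the paper's method) is to regress the log of the empirical complementary CDF of vertex degree against log degree: a strict power law gives an exact straight line whose slope is the exponent, while curvature in the fit residuals signals a heavy tail that is not strictly power-law.

```python
# Log-log CCDF check for a power-law degree distribution.
import math

def ccdf_points(degrees):
    """(log degree, log P[D >= degree]) for each distinct degree."""
    n = len(degrees)
    s = sorted(degrees)
    points = []
    for i, d in enumerate(s):
        if i == 0 or d != s[i - 1]:
            points.append((math.log(d), math.log((n - i) / n)))
    return points

def fit_slope(points):
    """Least-squares slope of y on x."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Synthetic check: a deterministic sample with P[D >= d] = d^(-1.2)
# lies exactly on a line in log-log coordinates, and the regression
# recovers the exponent. Curvature on real AS data would show up as
# a slope that drifts with the fitting range.
degrees = [(1 - k / 2000.0) ** (-1 / 1.2) for k in range(2000)]
print(round(fit_slope(ccdf_points(degrees)), 2))   # -1.2
```

Applying the same fit over different degree ranges of the extended AS maps is one way to exhibit the significant deviation from a single straight line that the talk reports.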