My talk will look at the ways people have tried to map the Internet over the last thirty years or so. A huge
number of different maps have been produced, with very diverse forms and functions, ranging from simple geographic
plans of cable routes to complex real-time 3D visualisations. They have been produced for a number of distinct
purposes: planning network deployment, operational management, proving academic theories, grad student
projects, market research, monitoring public policy, and policing and intelligence gathering. And, of course,
many have been motivated to map the Internet for no better reason than because it is there! Many different
aspects of the Internet have been mapped, from physical infrastructure through logical layers and protocols to traffic flows
and user demographics, at scales ranging from LANs up to the global Internet. Many of these maps are beautiful,
and many more are really rather ugly. A few are actually quite useful, but many more are not very helpful at all. However,
all the maps provide a fascinating picture of what the Internet looks like, or rather they provide fascinating insights into
what people think the Internet should look like. I will review a number of the most interesting and useful maps and attempt
to answer the question: what is the best way to map the Internet?
The UK government has launched two consultations on the retention of, and access to, communications data. The government's aim appears
to be the creation of a comprehensive mandatory regime of data storage covering all aspects of location and communications
traffic for almost the entire population. These proposals follow a string of initiatives designed to shift the privacy default in favour
of law enforcement, revenue, and national security agencies. In this talk I will outline the threats and benefits of universal surveillance of
communications, and place this assessment in the broader context of the declining state of privacy in Britain.
Simon Davies is Director of Privacy International.
The TLA+ specification language and the TLC model checker are described, along with experience using them at Compaq/HP and Intel
to write and debug high-level specifications, and the lessons drawn from that experience. Many popular fads are found to
be irrelevant to high-level specification.
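To give a flavour of what a checker like TLC does, here is a minimal sketch (in Python, not TLA+) of explicit-state model checking: an exhaustive breadth-first search over the reachable states of a toy system, testing an invariant in every state. The system itself, a classic lost-update race between two processes incrementing a shared counter non-atomically, is invented here purely for illustration and does not come from the talk.

```python
from collections import deque

# Toy system: two processes each increment a shared counter non-atomically
# (read into a local temporary, then write back), so an update can be lost.
# A state is a tuple (pc0, tmp0, pc1, tmp1, counter), where each pc is
# "read", "write", or "done".
INIT = ("read", 0, "read", 0, 0)

def next_states(state):
    """All states reachable in one step (either process takes one action)."""
    pc0, tmp0, pc1, tmp1, counter = state
    succs = []
    if pc0 == "read":                                   # process 0 reads counter
        succs.append(("write", counter, pc1, tmp1, counter))
    elif pc0 == "write":                                # process 0 writes tmp0 + 1
        succs.append(("done", tmp0, pc1, tmp1, tmp0 + 1))
    if pc1 == "read":                                   # process 1 reads counter
        succs.append((pc0, tmp0, "write", counter, counter))
    elif pc1 == "write":                                # process 1 writes tmp1 + 1
        succs.append((pc0, tmp0, "done", tmp1, tmp1 + 1))
    return succs

def invariant(state):
    """Claimed invariant: once both processes are done, counter == 2."""
    pc0, _, pc1, _, counter = state
    return not (pc0 == "done" and pc1 == "done") or counter == 2

def check(init, next_states, invariant):
    """Breadth-first search of the reachable state space. Returns a state
    violating the invariant, or None if the invariant holds everywhere."""
    seen = {init}
    queue = deque([init])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for succ in next_states(state):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return None

violation = check(INIT, next_states, invariant)
```

Because the two reads can interleave before either write, the checker finds a terminal state with counter equal to 1, which is exactly the kind of counterexample trace TLC reports for a buggy specification.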
Some time ago, Rob Schapire obtained a remarkable result in computational learning theory: "weak" classifiers can
be "boosted" by a suitable algorithm, so that a properly constructed combination of weak classifiers provides better
performance than might be expected. Since this foundational result appeared, practical versions of the algorithm, and
in particular the AdaBoost algorithm, have been used on many real problems, often with considerable success. Notably,
they have been demonstrated in many cases to exhibit a remarkable resistance to the usual "overfitting" problem
in machine learning. In an attempt to explain and better understand this useful behaviour, several theoretical
frameworks have been proposed for the analysis of boosting algorithms, and the field is ripe for further study. This
talk introduces boosting and reviews some of the theory that has appeared.