Mads Tofte - University of Copenhagen

A Polymorphic Type Discipline for Solving Year 2000 Problems

Beyond comparison, the most pressing problem for the computing industry is the Year 2000 problem.

In this talk we explain what the Year 2000 problem is and show its close connection to type theory. We present a new type discipline which allows users to find and correct Year 2000 problems in COBOL programs. The type discipline is implemented in a tool called AnnoDomini, which is sold as a commercial product for the remediation of IBM OS/VS COBOL programs. Although developed specifically for business applications, AnnoDomini borrows heavily from research in programming languages: it is written in Standard ML; it provides users with abstract (year) types; it is implemented using unification-based type inference; it was specified using operational semantics; and the core of its design was guided by formulating and proving theorems.
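
To make the connection between the Year 2000 problem and type inference concrete, here is a minimal sketch, assuming hypothetical YY/YYYY year formats and a hand-written constraint list, of how unification-based inference can propagate year formats between fields and flag a conflict as a remediation site. It is not AnnoDomini's actual algorithm or representation.

    # Minimal sketch of unification over hypothetical "year formats" (YY, YYYY) attached
    # to COBOL-like fields. The fields, formats and constraints below are illustrative
    # assumptions, not AnnoDomini's actual representation.

    class FormatConflict(Exception):
        pass

    def find(term, subst):
        """Follow substitution links from a variable (spelled '?name') to its binding."""
        while term.startswith("?") and term in subst:
            term = subst[term]
        return term

    def unify(a, b, subst):
        """Force two format terms to agree, extending the substitution as needed."""
        a, b = find(a, subst), find(b, subst)
        if a == b:
            return
        if a.startswith("?"):
            subst[a] = b
        elif b.startswith("?"):
            subst[b] = a
        else:
            raise FormatConflict(f"{a} vs {b}")

    # Constraints that a scan of a program might generate (hypothetical example):
    constraints = [
        ("?CUST-YEAR", "YY"),       # a PIC 99 field documented as a two-digit year
        ("?EXPIRY", "?CUST-YEAR"),  # MOVE CUST-YEAR TO EXPIRY makes the formats agree
        ("?EXPIRY", "YYYY"),        # elsewhere EXPIRY is compared with a four-digit year
    ]

    subst = {}
    for lhs, rhs in constraints:
        try:
            unify(lhs, rhs, subst)
        except FormatConflict as conflict:
            print("remediation site, year-format conflict:", conflict)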

Tony Hoare - University of Oxford

Objects: A Trace Model

Object-oriented programs are notoriously prone to the following kinds of error:

  1. Following a null pointer
  2. Deletion of an accessible object
  3. Failure to delete an inaccessible object
  4. Interference due to equality of pointers
  5. Inhibition of optimisation due to fear of (4)

Type disciplines and object classes are a great help in avoiding these errors. Stronger protection may be obtainable with the help of assertions, particularly invariants, which are intended to be true before and after each method that updates the structure of the heap. This talk introduces a mathematical model and language for the formulation of assertions about objects and pointers, and a calculus to help avoid the errors listed above. It deals with both garbage-collected heaps and the other kind.
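
To give a purely illustrative flavour of such assertions (the Node/Heap names below are hypothetical and not part of the trace model the talk presents), the sketch checks a simple heap invariant: every pointer reachable from a set of roots must refer to an object still recorded as allocated, so errors 1 and 2 above are caught at the invariant check rather than at the point of use.

    # Toy heap of objects with explicit allocation, used to check a pointer invariant.
    # Names and structure are illustrative only.

    class Node:
        def __init__(self, value):
            self.value = value
            self.next = None          # may legitimately be None at the end of a chain

    class Heap:
        def __init__(self):
            self.allocated = set()    # identities of objects we consider "live"

        def new(self, value):
            n = Node(value)
            self.allocated.add(id(n))
            return n

        def delete(self, node):
            self.allocated.discard(id(node))

        def check_invariant(self, roots):
            """Every reachable .next pointer must be None or point to an allocated object."""
            seen = set()
            stack = [r for r in roots if r is not None]
            while stack:
                n = stack.pop()
                if id(n) in seen:
                    continue
                seen.add(id(n))
                assert id(n) in self.allocated, "dangling pointer: reachable but deleted object"
                if n.next is not None:
                    stack.append(n.next)

    heap = Heap()
    a = heap.new("a")
    b = heap.new("b")
    a.next = b
    heap.check_invariant([a])        # holds: a -> b, both allocated

    heap.delete(b)                   # error 2: deleting an object that is still reachable
    try:
        heap.check_invariant([a])
    except AssertionError as err:
        print("invariant violated:", err)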

Derek McAuley - Microsoft Research Limited

In a Network, Whither Network Intelligence

Tunnelling protocols such as variants of secure IP and those used in (currently proprietary) Virtual Private Networking start to undermine some of the common Internet traffic assumptions. Similarly, some techniques that might be employed in the performance engineering of large web servers would be contrary to current recommendations.

Both scenarios involve the behaviour of an aggregation of traffic rather than individual streams, where intelligence about the components is in the end or edge systems.

The talk will cover some of the work underway at Microsoft Research to investigate how we might partition network control across domain boundaries to effectively deal with these aggregates.

Steven Benford - University of Nottingham

Mixed Reality Boundaries

Mixed reality boundaries establish transparent windows between physical and virtual spaces. One way of achieving this is to project a view of a virtual space into a physical space while simultaneously texture mapping a video view of the physical space into the virtual one. This seminar will introduce a set of properties that allow such boundaries to be configured to support different styles of co-operative activity. These properties are grouped into three categories: permeability (properties of visibility, audibility and solidity); situation (properties of location, alignment, mobility, segmentation and spatial consistency); and dynamics (properties of lifetime and configurability). The seminar will explore how each of these properties can be technically realised. It will also describe and compare two contrasting demonstrations, a performance and an office-door, that rely on different property configurations.
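
Purely as an illustration of how such a property taxonomy might be written down in software (the type and field names are hypothetical, not taken from the systems described), a boundary configuration can be captured as a small record grouped into the same three categories:

    # Hypothetical encoding of the boundary property taxonomy described above.
    from dataclasses import dataclass

    @dataclass
    class Permeability:
        visible: bool
        audible: bool
        solid: bool            # whether the boundary blocks movement between spaces

    @dataclass
    class Situation:
        location: str          # e.g. "office-door", "stage"
        aligned: bool          # physical and virtual views share an alignment
        mobile: bool
        segmented: bool
        spatially_consistent: bool

    @dataclass
    class Dynamics:
        lifetime: str          # e.g. "permanent", "session"
        configurable: bool

    @dataclass
    class MixedRealityBoundary:
        permeability: Permeability
        situation: Situation
        dynamics: Dynamics

    # An office-door style boundary: visible and audible, but not traversable.
    office_door = MixedRealityBoundary(
        Permeability(visible=True, audible=True, solid=True),
        Situation("office-door", aligned=True, mobile=False,
                  segmented=False, spatially_consistent=True),
        Dynamics(lifetime="permanent", configurable=True),
    )
    print(office_door.permeability)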

Arthur Norman - University of Cambridge

Three Variations of the Taylor Series Method for ODEs

This talk describes three linked activities bound together by their relationship to solving differential equations.

The first is the recovery of 25-year-old BCPL code through use of an automated BCPL to C translator, where the BCPL program is one that reads in sets of equations and generates FORTRAN code that can be used to solve them. This part of the activity brings out issues of how code can be lost, and of the challenges involved in recovering it, as well as providing an excuse to give a reprise of these particular numerical methods.

The middle part discusses a replacement for this old code, now being built in Java; as well as considering the numerical aspects of the task, it looks at issues of program portability, flexibility and packaging. The Java code is intended to be informed by lessons from its very much older BCPL predecessor, both ones that were recognised while that project was active and ones that are visible with the benefit of all that hindsight.

The final section discusses a scheme for using Taylor Series methods to solve systems of stiff equations (and also DAEs), which Ray Zahar has proposed and analysed but which has not previously been implemented in an easy-to-use form.
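
As generic background on the numerical technique (a textbook-style sketch, not the stiff/DAE scheme due to Zahar that the talk presents), a Taylor Series step can be taken by deriving the series coefficients from a recurrence. For the logistic equation y' = y(1 - y), writing y(t + h) as a power series with coefficients a_k and matching coefficients of y' against y - y^2 gives a_{k+1} = (a_k - sum_{j=0..k} a_j a_{k-j}) / (k + 1):

    # Generic Taylor series step for y' = y*(1 - y), illustrating the coefficient recurrence.
    # A textbook-style sketch, not the stiff/DAE scheme discussed in the talk.
    import math

    def taylor_step(y0, h, order=12):
        """Advance y' = y*(1-y) from y0 by step h using a truncated Taylor series."""
        a = [y0]                                              # a[k] is the k-th Taylor coefficient
        for k in range(order):
            conv = sum(a[j] * a[k - j] for j in range(k + 1))  # coefficient of y^2
            a.append((a[k] - conv) / (k + 1))                  # from (k+1)*a[k+1] = a[k] - conv
        # Evaluate the truncated series at h (Horner's rule).
        y = 0.0
        for coeff in reversed(a):
            y = y * h + coeff
        return y

    # Integrate from y(0) = 0.1 to t = 1 in ten steps and compare with the exact solution.
    y, h = 0.1, 0.1
    for _ in range(10):
        y = taylor_step(y, h)
    exact = 1.0 / (1.0 + 9.0 * math.exp(-1.0))    # logistic solution with y(0) = 0.1
    print(y, exact)                                # the two values agree to many digits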

Stephen Muggleton - University of York

Learning Logic and Language

Inductive Logic Programming (ILP) is the area of Computer Science which deals with the induction of hypothesised predicate definitions from examples and background knowledge. Logic programs are used as a single representation for examples, background knowledge and hypotheses. ILP is differentiated from most other forms of Machine Learning (ML) both by its use of an expressive representation language and its ability to make use of logically encoded background knowledge. This has allowed successful applications of ILP in areas such as molecular biology and computational chemistry.
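
To give a flavour of what inducing a predicate definition from examples and background knowledge means (a deliberately tiny, hypothetical illustration, far simpler than the ILP systems the talk concerns), the sketch below picks a definition of grandparent/2 from parent/2 background facts by generate-and-test over three candidate clause bodies:

    # Toy generate-and-test induction of grandparent(X, Z) from parent/2 background facts.
    # A deliberately tiny illustration of the ILP setting, not a real ILP system.

    background = {("alice", "bob"), ("bob", "carol"), ("dave", "eve")}   # parent(X, Y) facts
    positives = {("alice", "carol")}                                     # grandparent examples
    negatives = {("alice", "bob"), ("bob", "carol"), ("dave", "carol")}

    def covers(body, x, z):
        """Does the candidate body prove grandparent(x, z) from the background facts?"""
        if body == "parent(X,Z)":
            return (x, z) in background
        if body == "parent(Z,X)":
            return (z, x) in background
        if body == "parent(X,Y), parent(Y,Z)":
            return any((x, y) in background and (y, z) in background
                       for (_, y) in background)
        return False

    candidates = ["parent(X,Z)", "parent(Z,X)", "parent(X,Y), parent(Y,Z)"]

    for body in candidates:
        ok_pos = all(covers(body, x, z) for (x, z) in positives)
        ok_neg = not any(covers(body, x, z) for (x, z) in negatives)
        if ok_pos and ok_neg:
            # prints the chained-parent definition
            print("induced clause: grandparent(X,Z) :- " + body)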

The area of Computational Learning of Natural Language in Logic (LLL) is producing a number of challenges to existing ILP theory and implementations. In particular, language applications of ILP require revision and extension of a hierarchically defined set of predicates in which the examples are typically only provided for predicates at the top of the hierarchy. New predicates often need to be invented, and complex recursion is usually involved. Similarly the term structure of semantic objects is far more complex than in other applications of ILP. Advances in ILP theory and implementation related to the challenges of LLL are already producing beneficial advances in other sequence-oriented applications of ILP. In addition LLL is starting to develop its own character as a sub-discipline involving the confluence of computational linguistics, machine learning and logic programming.

Ronan Sleep - University of East Anglia

Event Stream Analysis

Data mining is based on the observation that large corporate bodies have accumulated considerable amounts of computer readable data. In some cases there is nearly half a century of data which can be mined using data reduction and visualisation techniques guided by human insight.

Event Stream Analysis is based on the observation that an ever-increasing amount of human activity is mediated by computer. Sometimes this mediation is explicit, as when the conventional keyboard/screen interface is used. Mediation occurs less obviously through the huge number of embedded chips employed in interfaces for consumer products. If we could only tap into all these chips, we would have available a massive dynamic source of data which, if successfully mined, could be exploited in a number of ways.

The potential of Event Stream Analysis will be illustrated during the seminar using experience obtained during the development of after-action review technology for MoD/DERA. It is conjectured that the most profound consequence of ESA technology may be to place the study of interactions between human beings on a firm experimental footing.

Ian Pratt - University of Cambridge

Optimising I/O

I/O performance has not kept pace with improvements made in other areas of computer architecture. A key cause of this is that little has changed in the design of I/O hardware and software over the last 20 years. Many decisions that were reasonable when first introduced now result in performance bottlenecks in today's systems.

The Computer Laboratory, along with other institutions, has been investigating ways of optimising I/O subsystems. This talk describes this work, outlining the changes in both hardware and software that have been proposed in order to streamline applications' access to I/O facilities. Many of these ideas look set to be adopted by the mainstream computer industry, promising big improvements in the I/O performance of future PCs.