To categorise is to respond differentially to certain kinds of input. As such, it is a very general form of behaviour, covering everything from Pavlovian and instrumental responding to the chess master pondering how to respond to a chess move. Once a system has a repertoire of categories grounded in sensorimotor interaction with their members, higher-order categories can be formed in two ways: direct sensorimotor interaction, as with the ground-level categories, or symbolic interaction, based on combining and recombining the names of lower-order categories into propositions describing new higher-order ones. I will present some behavioural and computational data comparing these two strategies.
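The symbolic route can be made concrete with a toy sketch (my illustration, not the speaker's model): ground-level categories are feature detectors acquired through interaction, and a new higher-order category is defined purely by recombining their names in a proposition, with no fresh sensorimotor exposure to its members. The category names and thresholds below are invented for illustration.

```python
# Ground-level categories: detectors assumed to have been "grounded" in
# sensorimotor interaction with category members.
def is_striped(features):
    return features.get("stripes", 0) > 0.5

def is_horse_shaped(features):
    return features.get("horse_shape", 0) > 0.5

# Symbolic route: "a zebra is a horse-shaped thing with stripes" -- a new
# higher-order category defined by combining the names of grounded ones.
def is_zebra(features):
    return is_horse_shaped(features) and is_striped(features)

print(is_zebra({"stripes": 0.9, "horse_shape": 0.8}))  # True
print(is_zebra({"stripes": 0.1, "horse_shape": 0.8}))  # False
```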
The talk begins with a simple and abstract characterisation of dialogue in terms of the mental state changes of dialogue participants, which raises three fundamental questions for any theory of dialogue. It goes on to discuss currently popular accounts of dialogue with respect to these three questions. Next, the notion of 'conversational game' is revisited within a probabilistic and decision-theoretic framework, and it is argued that such an interpretation is plausible both intuitively and as the basis for computational implementation. The talk concludes with an illustrative sketch of a proposed implementation using Bayesian networks.
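The decision-theoretic reading of a conversational game can be sketched in a few lines: the speaker maintains a probability distribution over the hearer's state and selects the dialogue move with the highest expected utility. This is a generic illustration of the standard expected-utility calculation, not the speaker's proposed implementation; the states, moves, and payoffs are invented.

```python
# Belief: a distribution over mutually exclusive hearer states (illustrative).
belief = {"knows_time": 0.4, "busy": 0.6}

# utility[move][state]: payoff of making `move` when `state` holds.
utility = {
    "ask_time":   {"knows_time": 10, "busy": -2},
    "small_talk": {"knows_time": 1,  "busy": -5},
}

def expected_utility(move):
    return sum(p * utility[move][state] for state, p in belief.items())

# The decision-theoretic move choice: maximise expected utility.
best_move = max(utility, key=expected_utility)
print(best_move)  # ask_time
```

In the Bayesian-network setting the belief distribution would be the posterior computed by the network from the dialogue history, rather than fixed numbers as here.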
Having been researched for two decades, temporal database technology has reached a level of maturity and sophistication that makes clear how far the support for time-varying data provided by current database products falls short of what is achievable.
Assuming no prior knowledge of the subject, this talk introduces the audience to temporal databases. Following some motivation, it provides an overview of fundamental temporal database management issues and concepts.
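One core concept the talk covers can be previewed with a minimal sketch: under valid time, each fact carries the interval over which it holds in the modelled world, and a "time-slice" query asks what was true at a given instant. The table contents below are invented for illustration.

```python
from datetime import date

# Each row: (employee, department, valid_from, valid_to).
# date(9999, 12, 31) stands in for "until changed" (often written "now"/"forever").
employment = [
    ("alice", "R&D",   date(1995, 1, 1), date(1996, 6, 30)),
    ("alice", "Sales", date(1996, 7, 1), date(9999, 12, 31)),
    ("bob",   "R&D",   date(1994, 3, 1), date(9999, 12, 31)),
]

def department_at(name, instant):
    """Time-slice query: which department did `name` belong to at `instant`?"""
    for emp, dept, start, end in employment:
        if emp == name and start <= instant <= end:
            return dept
    return None

print(department_at("alice", date(1996, 1, 1)))  # R&D
print(department_at("alice", date(1997, 1, 1)))  # Sales
```

A conventional (non-temporal) table would hold only the current row per employee, losing the history that makes such queries possible.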
The digital age has been heralded as the dawn of the Information Society, in which global networks deliver information, whether fiction or non-fiction, entertainment, education or commerce, at the touch of a key and across any distance. The reluctance of the creators and providers of information to embrace the possibilities of the digital age, however, is born of a deep suspicion that the main opportunity offered by digital technology will fall to those who wish to obtain the product without payment. The European Commission identified the problem in the late 1980s, when the ESPRIT programme introduced the topic of "electronic copyright" into its fields for collaborative R&D, and formalised the problems in the report of Commissioner Martin Bangemann on the Information Society, which made it clear that steps needed to be taken to accommodate IPR protection in some shape or form in the global information infrastructure. Under ESPRIT IV, the IMPRIMATUR project (ESPRIT Project 20676) was established to examine the degree to which consensus could be established across the information industries on the way in which IPRs could be accommodated. The project is addressing legal, business, technical and standards areas, and in parallel with this activity is developing and trialling an experimental Web-based information trading system reflecting the developing consensus. During the seminar, there will be an on-line demonstration of the IMPRIMATUR system.
The STOW group at XRCE Cambridge has been conducting fieldwork-based studies of a number of organisations where networked technologies are being introduced or considered. Our studies suggest that current research and design practice fails to recognise the problems that can occur when such technologies cross social boundaries. In this talk we will illustrate this claim with examples from a number of domains we have studied, defining and expanding on the notion of a social boundary and the ways in which such boundaries are manifest in work settings. We will show how attention to social boundaries can provide a powerful lens through which to view an organisation, or organisations, into which networked technologies are to be introduced. Finally, we will present some preliminary ideas on how a focus on social boundaries might more directly inform the design and development of new networked technologies.
Machine learning is a thriving cross-disciplinary field with growing industrial and commercial importance. In this talk I will present an overview of my current research interests, emphasising their mathematical foundations in probability theory, and highlighting some of the many links to other areas of computer science. I will also outline my view of the important directions for future research.
The seminar will start by introducing the reasons for building distributed service systems rather than traditional client-server systems. We will talk about the different architectures, tools and standards available and how these can be leveraged to increase productivity. We will also cover the issues that need to be addressed when building distributed systems.
Throughout the talk we will use examples from deployed systems at SBC Warburg to show what can be achieved today and the current difficulties facing builders of distributed service systems.
Most directors and managers have heard about the "millennium time-bomb". Most believe that it is mainly a problem for older data-processing systems that could fail at the end of the century when the two-digit year moves from 99 to 00. They are wrong. Year 2000 problems span the entire business, from the factory floor to the executive washroom and from the supply chain to product liability. Many systems will fail long before the end of the century. For most organisations, resolving their year 2000 issues will either be the largest project they have ever undertaken successfully, or the last they have the opportunity to attempt. The problems are compounded by decades of poor software engineering, so that repairing systems is unnecessarily expensive, time-consuming and error-prone.
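The underlying arithmetic fault is simple to demonstrate. The toy function below (my illustration, not drawn from the talk) mimics a legacy system that stores only the last two digits of the year: the same subtraction that is correct in 1999 produces nonsense once the century rolls over.

```python
def years_of_service(hired_yy, current_yy):
    """Duration computed with two-digit years, as in many legacy systems."""
    return current_yy - hired_yy

# Hired in (19)85, computed in (19)99: correct.
print(years_of_service(85, 99))  # 14

# Same employee, computed in (20)00, stored as 00: the "time-bomb".
print(years_of_service(85, 0))   # -85
```

The same pattern corrupts sorting, expiry checks and scheduling wherever two-digit years are compared or subtracted, which is why failures can surface well before 1 January 2000 (for example, when systems first handle dates that fall after it).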
There have been predictions that 10% of companies will go out of business, and that unemployment will rise to 6 million in the UK, with similar economic impact in other countries. Local authorities and emergency services are starting to make plans to avoid civil disorder. No-one knows how bad the disruption will be or how long it will last, but the uncertainty itself could depress the capital markets and cause economic damage.
Martyn Thomas has worked with major companies around the world on their Year 2000 programmes. In this talk he describes what he has seen, and attempts to draw some conclusions.
The last few years have seen a rapid growth in the development of systems that adopt a spatial approach to the presentation of computer-based information. This has been fuelled by the increasingly ubiquitous nature of the Internet and the maturing of 3D interaction techniques. This growth is reflected in the development of systems such as The Palace and AlphaWorld, and in the emergence of the Contact Consortium.
However, despite the large number of research and commercial explorations of virtual environments (including shared, multi-user virtual environments), little consideration has been given to the development of heterogeneous large-scale landscapes within these environments. Developers are given little or no support for the construction and realisation of shared virtual environments. This problem is compounded by the very nature of these environments, in that the models they exploit provide very little support for rich semantic behaviours.
This talk will consider the issues emerging in the development of future multi-user virtual environments and the research challenges that need to be addressed. In particular, it will focus on the development of a richer semantic model for virtual worlds and the techniques needed to develop richer, more interactive virtual environments.