Prof Keith van Rijsbergen
Motivated by problems arising in image retrieval, I will present
some ideas about how to model such retrieval in the context of some
earlier approaches to text retrieval. In particular, I will discuss
problems associated with indexing, relevance feedback, and browsing
as applied to images. The talk will assume little prior knowledge of
Information Retrieval. Time permitting, I will demonstrate a
path-dependent browsing tool implemented to help in the research on
image retrieval.
Unconditionally secure communication means that even an infinitely
powerful adversary can break neither the confidentiality nor the
authenticity of the system. Classical results by Shannon, dating back
some 50 years, seem to imply that unconditionally secure solutions are
doomed to be impractical, if not impossible. However, in recent
years, new research has shown that these results were based on rather
pessimistic assumptions about the amount of information available to
an adversary. It turns out that in many practical scenarios these
assumptions are not satisfied, e.g., when communication is noisy, in
large networks where not all nodes can be hacked into, or when
quantum communication is used. In all these settings, unconditional
security is indeed possible. We will survey some results in this
area, with particular emphasis on quantum communication.
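Shannon's classical example of an unconditionally secure cipher is the one-time pad, whose secrecy holds against even an unbounded adversary provided the key is truly random, as long as the message, and never reused. A minimal illustrative sketch (not part of the talk itself):

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte of data with the corresponding key byte.

    Perfect secrecy (Shannon, 1949) requires the key to be uniformly
    random, at least as long as the message, and used only once.
    Encryption and decryption are the same XOR operation.
    """
    assert len(key) == len(data), "key must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh, uniformly random key
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message  # XOR with the same key decrypts
```

The impracticality Shannon's result implies is visible here: the key material must match the message in length and can never be reused, which is exactly the constraint the newer relaxed-adversary models aim to escape.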
Professor Les Hatton
Scientific software and the science of simulation have grown very quickly to
fill the needs placed upon them by society, which in turn now completely
depends on those simulations. Every area of human activity is affected to some
degree. Unfortunately, whereas the science involved in specifying a
simulation is the subject of a generally careful and effective peer review
system, the resulting software is not. As a result, latent faults in the
software can easily survive simulation testing and cause failure of the
simulation.
As scientists, we should not forget that doing the science is usually the easy
bit. Producing a defect-free implementation of anything but the most trivial
of simulation algorithms seems entirely beyond the capabilities of computer
science at the present stage of development and is likely to remain so for the
foreseeable future. Society and indeed scientists will therefore have to
learn how to assess and live with this risk.
This seminar is a personal view of this problem and summarises 15 years of
experiments by the author and collaborators in trying to understand the
underlying patterns behind software failure. These will be illustrated using
the results of very large experiments on software failure carried out at
various stages over this period in many different industries. Is it getting
any better? No. Could we do better? Yes, much.
A prevailing trend in software engineering is the use of tools which
apparently simplify the problem to be solved. Often, however, this
results in complexity being concealed or "magicked away".
For the most critical of systems, where a credible case for safety and
integrity must be made prior to there being any service experience, we
cannot tolerate concealed complexity and must be able to reason logically
about the behaviour of the system.
The presentation draws on real-life project experience to identify
some historical and current magics and their effect on high-integrity
software development; this is contrasted with the cost and quality benefits
that can be gained by taking a more logical and disciplined approach.