
Higher Order Operational Techniques in Semantics

Contents


Preface

This is a collection of articles about recent developments in operational semantics for higher order programming languages, by some of the leading researchers in the field. The idea for the book arose at the workshop on Higher Order Operational Techniques in Semantics (HOOTS) that took place in October 1995. The workshop was organised by the editors of this volume as part of a six-month programme on Semantics of Computation at the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, with financial assistance from the EPSRC and Harlequin Ltd.

Although international conferences and workshops on programming language semantics occur regularly, the HOOTS workshop was unusual for its focus on operational techniques across a range of high-level programming styles. These uses of operational semantics have arisen from a variety of research communities---for example, in concurrency theory, in functional programming, and in type theory---and some of the methods and techniques have emerged only recently. For these reasons the HOOTS workshop was a timely and stimulating meeting. The nine contributions to this volume are original articles which develop in depth many of the themes introduced at the workshop. Each has been refereed in the usual way. We wish to thank all the contributors and referees for their efforts.

Finally, we are grateful to Lindy Rees for drawing the HOOTS logo.

Andrew D. Gordon and Andrew M. Pitts
August 1997


Abstracts

Operational equivalences for untyped and polymorphic object calculi
A. D. Gordon

We survey several definitions of operational equivalence from studies of the lambda-calculus in the setting of the self-calculus, Abadi and Cardelli's untyped object calculus. In particular, we study the relationship between the following: the equational theory induced by the primitive semantics; Morris-style contextual equivalence; experimental equivalence (the equivalence implicit in a Milner-style context lemma); and the form of Abramsky's applicative bisimilarity induced by Howe's format.
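
As a point of reference for these definitions (the chapter gives the precise, typed versions for the object calculus), Morris-style contextual equivalence of terms a and b can be sketched as

    a \cong b  iff  for every context C closing both terms,  C[a]\Downarrow \iff C[b]\Downarrow,

where \Downarrow denotes convergence of evaluation. Applicative bisimilarity is instead defined co-inductively from the observations that can be made by running a term and probing the result; Howe's method is the standard route to proving that it is a congruence and hence sound for contextual equivalence.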

We repeat this study in the setting of Abadi and Cardelli's polymorphic object calculus obtained by enriching system F<: with primitive covariant self types for objects. In particular, we obtain for the first time a co-inductive characterisation of contextual equivalence for an object calculus with subtyping, parametric polymorphism, variance annotations and structural typing rules. Soundness of the equational theory induced by the primitive semantics of the calculus has not been proved denotationally, because structural typing rules invalidate conventional denotational models. Instead, we show soundness of the equational theory using operational techniques.

Semantics for core Concurrent ML using computation types
A. Jeffrey

This paper presents two typed higher-order concurrent functional programming languages, based on Reppy's Concurrent ML. The first is a simplified, monomorphic variant of CML, which allows reduction of terms of any type. The second uses an explicit type constructor for computation, in the style of Moggi's monadic metalanguage. Each of these languages is given an operational semantics, which can be used as the basis of bisimulation equivalence. We show how Moggi's translation of the call-by-value lambda-calculus into the monadic metalanguage can be extended to these concurrent languages, and that this translation is correct up to weak bisimulation.
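
For orientation, on the purely functional fragment Moggi's call-by-value translation can be sketched as follows, writing [M] for the translation of M, \mathit{val}\,e for the unit of the computation type constructor and \mathit{let}\;x \Leftarrow e_1\;\mathit{in}\;e_2 for its bind (the chapter's notation for the concurrent languages may differ):

    [x] = \mathit{val}\,x
    [\lambda x.M] = \mathit{val}\,(\lambda x.[M])
    [M\,N] = \mathit{let}\;f \Leftarrow [M]\;\mathit{in}\;\mathit{let}\;a \Leftarrow [N]\;\mathit{in}\;f\,a

Correctness up to weak bisimulation says that a term and its translation are weakly bisimilar, in the sense made precise in the chapter.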

Relational reasoning about contexts
S. B. Lassen

The syntactic nature of operational reasoning requires techniques to deal with term contexts, especially for reasoning about recursion. In this paper we study applicative bisimulation and a variant of Sands' improvement theory for a small call-by-value functional language. We explore an indirect, relational approach for reasoning about contexts. It is inspired by Howe's precise method for proving congruence of simulation orderings and by Pitts' extension thereof for proving applicative bisimulation up to context. We illustrate this approach with proofs of the unwinding theorem and syntactic continuity and, more importantly, we establish analogues of Sangiorgi's bisimulation up to context for applicative bisimulation and for improvement. Using these powerful bisimulation up to context techniques, we give concise operational proofs of recursion induction, the improvement theorem, and syntactic minimal invariance. Previous operational proofs of these results involve complex, explicit reasoning about contexts.
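
For reference, applicative similarity for a call-by-value functional language can be sketched co-inductively: \sqsubseteq is the largest relation on closed terms such that whenever M \sqsubseteq N and M \Downarrow \lambda x.M', then

    N \Downarrow \lambda x.N'   and   M'[V/x] \sqsubseteq N'[V/x]  for every closed value V;

applicative bisimilarity is the symmetric version. Bisimulation up to context strengthens the accompanying co-induction principle by allowing related terms to be compared inside a common context rather than in isolation.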

Labeling techniques and typed fixed-point operators
J. C. Mitchell, M. Hoang and B. T. Howard

Labelling techniques for the untyped lambda calculus were developed by Lévy, Hyland, Wadsworth and others in the 1970s. A typical application is the proof of confluence from finiteness of developments: by labelling each subterm with a natural number indicating the maximum number of times that subterm may participate in a reduction step, we obtain a strongly normalising approximation of beta-eta-reduction. Confluence then follows by a syntactic "compactness" argument (repeated in the introduction of this paper).

This paper presents applications of labelling to typed lambda calculi with fixed-point operators, including confluence and completeness of leftmost reduction for PCF (an "applied" lambda calculus with fixed-point operators and numeric and boolean operations) and a confluence proof for a variant of typed lambda calculus with subtyping. Labelling is simpler for typed calculi than for untyped ones because the fixed-point operator is the only source of nontermination. We can also use the method of logical relations for the labelled typed calculus, extended with basic operations such as addition and conditionals; this method would not apply to the untyped lambda calculus.
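
Concretely, the typed labelling amounts to annotating each occurrence of the fixed-point operator with the number of unfoldings it may still perform; in a notation of our own choosing (the paper's may differ), the key rules are

    \mathit{fix}^{n+1}\,M \to M\,(\mathit{fix}^{n}\,M)        and        \mathit{fix}^{0}\,M  has no reduction.

Because each unfolding consumes one unit of the label, the labelled calculus is strongly normalising (established via logical relations), and erasing labels recovers the ordinary rule \mathit{fix}\,M \to M\,(\mathit{fix}\,M); properties of the unlabelled calculus then follow by a compactness-style argument.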

Semantics of memory management for polymorphic languages
G. Morrisett and R. Harper

We present a static and dynamic semantics for an abstract machine that evaluates expressions of a polymorphic programming language. Unlike traditional semantics, our abstract machine exposes many important issues of memory management, such as value sharing and control representation. We prove the soundness of the static semantics with respect to the dynamic semantics using traditional techniques. We then show how these same techniques may be used to establish the soundness of various memory management strategies, including type-based, tag-free garbage collection; tail-call elimination; and environment strengthening.
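
Schematically, and in our notation rather than the paper's, soundness has the familiar two-part shape for machine states built from a heap H and a program term e:

    Preservation:  if \vdash (H, e)\;\mathit{ok} and (H, e) \mapsto (H', e') then \vdash (H', e')\;\mathit{ok}
    Progress:      if \vdash (H, e)\;\mathit{ok} then either e is a final value or (H, e) \mapsto (H', e') for some (H', e').

A memory-management strategy such as tag-free garbage collection is then justified, roughly, by showing that it transforms well-formed machine states into well-formed states that the program cannot distinguish from the originals.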

Operational reasoning for functions with local state
A. M. Pitts and I. D. B. Stark

Languages such as ML or Lisp permit the use of recursively defined function expressions with locally declared storage locations. Although this can be very convenient from a programming point of view, it severely complicates the properties of program equivalence, even for relatively simple fragments of such languages, for example the simply typed fragment of Standard ML with integer-valued references considered here. This paper presents a method for reasoning about contextual equivalence of programs involving this combination of functional and procedural features. The method is based upon the use of a certain kind of logical relation parameterised by relations between program states. The form of this logical relation is novel, inasmuch as it involves relations not only between program expressions, but also between program continuations (also known as evaluation contexts). The authors found this approach necessary in order to establish the `Fundamental Property of logical relations' in the presence of both dynamically allocated local state and recursion. The logical relation characterises contextual equivalence and yields a proof of the best known context lemma for this kind of language, the Mason-Talcott `ciu' theorem. Moreover, the method is shown to prove equivalences in examples for which such a context lemma is of little help, involving representation independence, higher-order memoising functions, and profiling functions.
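
A small illustration, in Standard ML, of the kind of equivalence at issue (this particular pair is our own example of representation independence, not necessarily one taken from the chapter):

    val up   = let val r = ref 0 in fn () => (r := !r + 1; !r) end
    val down = let val r = ref 0 in fn () => (r := !r - 1; ~(!r)) end

No context can tell up from down: the reference r is private to each function, and the invariant relating contents n in the first to ~n in the second is maintained by every call. Establishing such invariants between hidden states is precisely what the state-parameterised logical relation is designed for.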

Improvement theory and its applications
D. Sands

An improvement theory is a variant of the standard theories of observational approximation (or equivalence) in which the basic observations made of a functional program's execution include some intensional information about, for example, the program's computational cost. One program is an improvement of another if its execution is more efficient in any program context. In this article we give an overview of our work on the theory and applications of improvement. Applications include reasoning about time properties of functional programs, and proving the correctness of program transformation methods. We also introduce a new application, in the form of some bisimulation-like proof techniques for equivalence, with something of the flavour of Sangiorgi's ``bisimulation up to expansion and context''.
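
Glossing over the precise cost model used in the chapter, the improvement relation can be sketched as follows: writing M \Downarrow^{n} when evaluation of M converges at cost n,

    M is improved by N  iff  for every closing context C, whenever C[M] \Downarrow^{n} then C[N] \Downarrow^{m} for some m \le n.

Improvement therefore refines observational approximation: related programs must not only agree on termination in all contexts, but the improving program must terminate at no greater cost.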

The coverage of operational semantics
S. F. Smith

Techniques of operational semantics do not apply universally to all language varieties: techniques that work for simple functional languages may not apply to more realistic languages with features such as objects and memory effects. We focus mainly on the characterisation of so-called finite elements. The presence of finite elements in a semantics allows for an additional, powerful induction mechanism. We show that in some languages a reasonable notion of finite element may be defined, but that for other languages this is problematic, and we analyse the reasons for these difficulties.
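
The flavour of the induction mechanism, very roughly, is that of fixed-point induction: when finite elements exist, each program is the limit of a chain of finite approximations (for instance, finite unwindings of its recursive definitions), and a suitably admissible property holds of the program once it holds of every approximation; schematically,

    (for all n, P(\mathit{fix}^{n}\,M))  and  P admissible   implies   P(\mathit{fix}\,M).

Whether a given language supports such approximations at all is exactly the coverage question examined here.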

We develop a formal theory of language embeddings and establish a number of properties of embeddings. More complex languages are given semantics by embedding them into simpler languages. Embeddings may be used to establish more general results and to avoid reproving others; they also give us a formal measure of the gap between different languages. Dimensions of the untyped programming language design space addressed here include functions, injections, pairs, objects, and memories.

Reasoning about functions with effects
C. Talcott

An important advantage claimed for functional programming is the elegant equational theory and the ability to reason about programs by simply replacing equals by equals. Also, higher-order functions provide the ability to define high-level abstractions that simplify programming. On the other hand, real programs have effects, and it is often more concise to express a computation using effects. We would like to have the best of both worlds: functions with effects. The methods described in this paper allow one to have the expressive power of functions with effects and to develop rich equational theories for such languages. We extend our previous work on semantics of imperative functional languages (Talcott 1985, 1992; Mason and Talcott 1991a, 1992; Honsell, Mason, Smith, and Talcott 1995) by treating the combination of control and memory effects, and by developing the semantic theory in a more abstract setting, to better capture the essential features of program equivalence.
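
A tiny Standard ML illustration (our own, not an example from the paper) of why effects complicate replacing equals by equals: in a pure language the two bindings below would be interchangeable, since the second is obtained from the first by unfolding the application, yet with a store they compute different values.

    val a = let val r = ref 0 in (fn x => x + x) (r := !r + 1; !r) end      (* evaluates to 2 *)
    val b = let val r = ref 0 in (r := !r + 1; !r) + (r := !r + 1; !r) end  (* evaluates to 3 *)

Equational theories for lambda-languages with effects must therefore be sensitive to evaluation order and sharing, which is what the methods described here are designed to handle.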

We adopt the view proposed in (Landin 1966) that a programming language consists of expressions of the lambda calculus augmented with primitive operations. We call such languages lambda-languages. The primitive operations that we have in mind include not only basic constants, branching, and algebraic operations such as arithmetic and pairing, but also operations that manipulate the computation state (store, continuation), and the environment (sending messages, creating processes). The presence of higher-order objects in lambda-languages makes defining and reasoning about program equivalence more complicated than in the first-order case, with or without effects. The methods and results described in this paper arose from a desire to treat a wide range of lambda-languages in a unified manner. Although we only consider sequential languages here, the key ideas seem to extend to primitives for concurrency as well (Agha, Mason, Smith, and Talcott 1997).