A Workshop on
Higher Order Operational Techniques in Semantics

October 28-30, 1995
Isaac Newton Institute for Mathematical Sciences
University of Cambridge, UK

As part of the University of Cambridge Isaac Newton Institute research programme on Semantics of Computation (July-Dec 1995), HOOTS brought together researchers to discuss current developments in operational techniques for the semantics of higher-order languages. The theory of a first-order process calculus such as CCS is perhaps the prime example of an operational theory; the study of operational techniques for higher-order languages is now a thriving area in its own right, with several outstanding open problems, an important one being a theory of program equivalence for Standard ML. Some of these techniques have emerged in distinct communities, e.g. the concurrency, functional programming and type theory communities. The purpose of HOOTS was to bring researchers from these communities together to discuss current trends in the theory of operational semantics, its application to higher-order languages and its connection with more established semantic techniques.

Here is the list of participants. The meeting was organised by A. Gordon and A. Pitts.


Schedule

`*' marks the speaker of co-authored presentations.
All daytime talks are 30 minutes plus 5 for questions.

SATURDAY OCTOBER 28 (Seminar Room One, Newton Institute)

0900-0930	Registration
0930-0940	Welcome
0940-1015	Carolyn Talcott
		Reasoning about Equivalence of Higher Order Actor Programs
1015-1050	David Aspinall and Adriana Compagnoni*
		Subtyping Dependent Types
		Coffee	(30 minutes)
1120-1155	Luca Cardelli
		Operationally Sound Update
1155-1230	Andrew Gordon* and Gareth Rees
		Bisimilarity for Primitive Objects
		Lunch (90 minutes) including 30 minute
		  MLWorks demo in Seminar Room One at 1320
1400-1435	Matthias Felleisen
		Modeling Allocation is Important
1435-1510	Ian Stark
		Reasoning about State in ML
1510-1545	Andrew Pitts
		Operationally-based Logical Relations for Idealized Algol
		Tea

SATURDAY EVENING (King's College)

King's is about twenty minutes from the Newton Institute by foot.
1715-1815	Choral Evensong in King's Chapel, for those so inclined
1815-1845	Reception (Chetwynd Room)
1845-1945	Robin Milner (Keynes Hall)
		Defining Languages via Action Calculi
2000		Banquet (Great Hall)
After dinner	Jo Marks (Great Hall)

SUNDAY OCTOBER 29 (Seminar Room One, Newton Institute)

0930-1005	Doug Howe
		Adding Equivalence Classes to Untyped Lambda-Calculi
1005-1040	Dave Sands
		Improvement Theory and its Applications
		Coffee	(40 minutes)
1120-1155	Kohei Honda
		Composing Processes
1155-1230	Davide Sangiorgi
		Behavioural Equivalences for Higher-Order Process Calculi
		Lunch	(90 minutes)
1400-1435	Nobuko Yoshida
		On Reduction Based Process Semantics
1435-1510	Alan Jeffrey
		Monadic types for the semantics of concurrent functional programming
		Tea	(40 minutes)
1550-1625	Mads Tofte
		A Co-inductive Proof of the Soundness of Region Inference
1625-1700	Dave MacQueen
		Higher-order modules: semantics to implementation

MONDAY OCTOBER 30 (Keynes Hall and Chetwynd Room, King's College)

0930-1005	Jerzy Tiuryn and Mitch Wand*
		Untyped Lambda-Calculus with Input-Output
1005-1040	Scott Smith
		The Coverage of Operational Semantic Techniques
		Coffee	(30 minutes)
1110-1145	John Mitchell
		Labeling Techniques and Typed Fixed-point Operators
1145-1220	Robert Harper
		Typed Closure Conversion
		Lunch	(100 minutes)
1400-1420	Soeren Boegh Lassen
		Operational Reasoning in Action Semantics
1420-1440	Sandip K. Biswas
		Dynamic Slicing in Higher-Order Programs
1440-1500	Graham Collins
		Mechanised Applicative Bisimulation

Abstracts


Subtyping Dependent Types

David Aspinall (University of Edinburgh) and Adriana Compagnoni (University of Cambridge)

We describe a subtyping extension of the Pure Type System lambdaP (an abstract version of LF). This system is the simplest corner of the lambda-cube with type-dependency, yet forms the core of applied type-theories for which subtyping is a desirable extension, for example to give better economy of encodings in LF or to allow automatic adaptation of proofs in lambdaPRED (the predicate calculus fragment of the Calculus of Constructions).

We establish some meta-theoretic results about the system, including subject reduction, minimal types and decidability.

The combination of subtyping and type-dependency is quite tricky to analyse, mainly because the typing and subtyping relations are mutually defined, which means that established techniques for examining subtyping in isolation no longer apply. We prove our results using an algorithmic reformulation which breaks some of the circularity in the definitions and separates beta-reduction on types and terms.
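
For orientation, a typical rule for subtyping dependent products (a standard formulation, not necessarily the precise rule of this system) already exhibits the circularity: its premises mention an extended context whose well-formedness depends on typing, while typing uses subtyping via subsumption.

    \[
    \frac{\Gamma \vdash A' \leq A \qquad \Gamma, x{:}A' \vdash B \leq B'}
         {\Gamma \vdash \Pi x{:}A.\,B \;\leq\; \Pi x{:}A'.\,B'}
    \]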


Dynamic Slicing in Higher-Order Programs

Sandip K. Biswas (University of Pennsylvania)

Program Slicing, for first-order imperative programs, is an extensively researched area, with applications in debugging and testing of programs. A first-order imperative program is a sequence of statements. The control-flow graph of such programs can be inferred from the parse tree.

A dynamic slice of a program P with input I is a subsequence Q of the statements of P such that, when Q is executed with input I, the value returned is the same as that of P run on I.

Higher-order languages, like ML, differ from first-order imperative programs in two significant ways. Firstly, we do not have the two syntactic categories of statements and expressions: the entire program is an expression, and deleting arbitrary sub-expressions no longer leaves behind an executable program. Hence, we redefine the deletion of a subexpression as replacement by a no-op term, and provide an operational semantics for this extended language. Secondly, the control flow is substantially more complicated, particularly in the presence of exceptions.

If a core-ML term M, with assignments and exceptions, evaluates to a value or an exception packet, then we provide a proof system to compute the subterms of M that can be deleted such that the deleted term, under the extended operational semantics, still evaluates to the same answer. This computation of deletable subterms can be carried out by instrumenting the ML code of the original term.
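
As a small, purely illustrative sketch of slicing by no-op replacement (written here in OCaml; the binding noop and the example program are hypothetical, not taken from the paper):

    (* Hypothetical illustration of deletion by no-op replacement.  The
       binding [noop] stands in for the paper's no-op term. *)
    let noop = ()

    (* Original program: the update of [log] never influences the result. *)
    let original input =
      let log = ref [] in
      let double x = log := x :: !log; 2 * x in
      double input + 1

    (* Sliced program: the logging subexpression is replaced by the no-op. *)
    let sliced input =
      let double x = noop; 2 * x in
      double input + 1

    (* For this input, both programs return the same answer. *)
    let () = assert (original 21 = sliced 21)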


Operationally Sound Update

Luca Cardelli (Digital, Systems Research Center)

The type rules needed to typecheck "sufficiently polymorphic" update operations on records and objects are based on unusual assumptions about the structure of the collection of types. These "structural" update rules are sound operationally but not denotationally (in standard models). They arise naturally in type systems for programming, and are not easily avoidable. Thus, we have a situation where operational semantics is more advantageous than denotational semantics.

These operationally sound rules populate quantified types with elements that, by parametricity arguments, should not be there. To resolve this apparent paradox, I describe a translation from a type system with structural rules to a normal type system that supports parametricity.


Mechanised Applicative Bisimulation

Graham Collins (University of Glasgow)

It is often stated that one advantage of functional programming languages, and in particular lazy functional languages, is that they are suitable for formal reasoning. While such reasoning is indeed carried out by some, there is both a need and a demand for tools to make formal reasoning more practical.

Many existing systems fall short of the functionality necessary to support practical reasoning, particularly where reasoning about infinite behaviour is required. Operational techniques offer an approach which is suitable for mechanising in a theorem prover.

This talk describes the current state of the development of a system to support formal reasoning about functional programs within the HOL theorem proving environment. An operational approach is used in the system and one of the key steps in the development is the co-inductive definition of equality of terms as applicative bisimulation.
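
For reference, applicative bisimulation in its standard untyped, lazy formulation (due to Abramsky; the mechanised formulation may differ in detail) is the largest relation R on closed terms such that, whenever M R N:

    \[
    M \Downarrow \lambda x.\,M' \;\Longrightarrow\;
    \exists N'.\; N \Downarrow \lambda x.\,N'
    \;\wedge\; \forall L\ \text{closed}.\ M'[L/x] \mathrel{R} N'[L/x]
    \]

and symmetrically with the roles of M and N exchanged.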


Bisimilarity for Primitive Objects

Andrew Gordon and Gareth Rees (University of Cambridge)

In earlier work we considered Abadi and Cardelli's first-order calculus of objects, and validated their equational theory via a form of bisimilarity induced by the reduction semantics of objects. The first-order calculus supports only a limited form of subtyping. Abadi and Cardelli removed many of its limitations in their type system based on primitive covariant self types. We will present work in progress on extending our framework to a second-order calculus of objects, with primitive covariant self types and polymorphism.


Typed Closure Conversion

Robert Harper (Carnegie Mellon University)

Type-directed translation is an important technique in compiling functional programs. By propagating type information through the back end of the compiler we may support native data representations, ad hoc polymorphic operations, and tag-free garbage collection. Closure conversion is an important compiler transformation in which functions are separated into pure code, which is shared among all instances, and environment, which is specific to the instance. We consider closure conversion for the simply-typed and polymorphic lambda-calculi as type-directed translations. In the simply-typed case we exploit the Pierce-Turner object type system to characterize closure-converted code. In the polymorphic case we encounter problems with type sharing that are handled using the notion of translucency, a method for controlling type abstraction in module systems.
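
A minimal sketch of the underlying idea, written in OCaml with a GADT standing in for the existential type that hides the environment (illustrative only; the talk's translation targets a typed intermediate calculus, not OCaml):

    (* A closure pairs code with its captured environment; the environment
       type 'e is existentially hidden, so all closures from 'a to 'b share
       one type regardless of what they capture. *)
    type ('a, 'b) clos =
      | Clos : ('e -> 'a -> 'b) * 'e -> ('a, 'b) clos

    let apply c x =
      match c with
      | Clos (code, env) -> code env x

    (* Closure-convert the source term  fun x -> x + y  (with y free):
       the code is closed, and y is passed through the environment. *)
    let convert y : (int, int) clos = Clos ((fun env x -> x + env), y)

    let () = assert (apply (convert 41) 1 = 42)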


Composing Processes

Kohei Honda (University of Manchester)

In this talk we discuss an elementary theory of composability of processes and its applications. Given a set of processes and all possible ways of composing them, some compositions may be desirable and some may not, in view of the resulting computational behaviour. The field that provides a basic framework for the analogous problem in the setting of sequential computation is the study of type disciplines for programming languages, which has offered well-developed general theories as well as significant applications. By contrast, our current understanding of types for processes is still fragmentary, in the sense that we lack a basic mathematical framework in which to view the diverse individual concerns about types for concurrency on a common basis.

The talk introduces a class of typed algebras as an elementary step towards such a unifying framework. The necessary mathematical machinery is quite simple, yet the class of typed algebras encompasses a wide range of type disciplines for processes: it provides a common technical footing for several recent proposals on types for processes, such as Milner's sorting and Girard-Lafont's typed nets, suggests a new type discipline of practical interest, and offers a secure basis for integration. Specifically, we show how the higher-order nature of sorting can be cleanly integrated with any typed algebra, which is essential for using the present framework to reason about partial correctness of concurrent programs that allow port names to be communicated, as in CML and PICT. We also discuss an abstract characterisation of typability in terms of the underlying partial algebras, in which typability is completely characterised by a certain modularity principle for composing processes. This result gives us a basic understanding of the type disciplines representable in the typed algebras, as well as a broader perspective on possible notions of types for processes beyond the present framework.


Monadic types for the semantics of concurrent functional programming

Alan Jeffrey (University of Sussex)

This talk presents a concurrent functional programming language based on Reppy's CML, but with a type system based on Moggi's computational monads. The operational semantics for the language is presented, and compared with the semantics of a subset of CML.
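
The flavour of such a type system can be conveyed by an interface that separates values from computations; the following OCaml signature is a hypothetical sketch in that spirit, not Jeffrey's actual system:

    module type CONC = sig
      type 'a comp                          (* suspended, possibly effectful computation *)
      type 'a chan                          (* typed communication channel *)

      val return  : 'a -> 'a comp           (* monad unit: a pure value as a computation *)
      val bind    : 'a comp -> ('a -> 'b comp) -> 'b comp

      val spawn   : unit comp -> unit comp  (* run a computation concurrently *)
      val channel : unit -> 'a chan comp    (* allocate a fresh channel *)
      val send    : 'a chan -> 'a -> unit comp
      val recv    : 'a chan -> 'a comp
    end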


Operational Reasoning in Action Semantics

S. B. Lassen (BRICS, University of Aarhus)

This talk reports on ongoing work on developing the theory of action semantics [1] for reasoning about programs. Extending earlier results for a small basic fragment of action semantics [2], the talk treats a much larger functional/declarative fragment which incorporates two kinds of information flow and features higher-order, (countably) nondeterministic, and interleaving computation. Action notation, the specification language of action semantics, is given a reduction semantics, and a variety of operational reasoning principles are developed. These techniques are used to establish an inequational theory for action notation, and, via an action-semantic description of a functional language, this action theory is applied to prove various functional program equivalences.

[1] P. D. Mosses. Action Semantics. Cambridge University Press, 1992.
[2] S. B. Lassen. Basic Action Theory. BRICS Technical Report RS-95-25, Aarhus, May 1995.


Defining Languages via Action Calculi

Robin Milner (University of Cambridge)

The notion of action calculus was introduced in an attempt to bring some uniformity into the study of behavioural calculi such as the lambda calculus, the pi calculus, Petri nets, ...

The actions of an action calculus can be drawn as a special kind of graph, called an action graph. Thus lambda reduction and pi-calculus reaction are special cases of graph reduction over such graphs.

Each action calculus is determined by its controls and their reaction rules. Superposition of one calculus upon another is achieved by combining their control-sets, and taking some care over how their reactions interact. In particular, there is a uniform way of lifting any action calculus to higher-order, by superposing on it the (action calculus for the) lambda calculus.

Each action calculus is equipped also with a space of semantic interpretations; these are control structures, which are action structures with additional structure.

All this suggests that action calculi may be suitable for the presentation of both the dynamics and the abstract semantics of programming languages. I shall illustrate how this may be done, without giving too much technical detail. The discussion is made considerably easier by the use of action graphs.


Labeling Techniques and Typed Fixed-point Operators

John Mitchell (Stanford University and Newton Institute)

Labeling techniques for untyped lambda calculus were developed by Levy, Hyland, Wadsworth and others in the 1970's. A typical application is the proof of confluence from finiteness of developments: by labeling each lambda with a natural number indicating the maximum number of times this symbol may materially participate in a reduction step, we obtain a strongly-normalizing approximation of beta,eta-reduction. Confluence then follows by a syntactic "continuity" or "compactness" argument.

In this talk we will focus on applications of labeling to typed lambda calculi with fixed-point operators, including confluence and completeness of left-most reduction for PCF (joint work with B. Howard) and a confluence proof for a variant of typed lambda calculus with subtyping (work with M. Hoang).
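
In the typed setting the labels are attached to the fixed-point operators rather than to the lambdas; as a rough sketch (a standard presentation, not necessarily the exact system of the talk), each operator carries a bound on the number of permitted unwindings:

    \[
    \mathrm{fix}^{\,n+1} f \;\to\; f\,(\mathrm{fix}^{\,n} f),
    \qquad\qquad
    \mathrm{fix}^{\,0} f \not\to
    \]

The labeled calculus is strongly normalizing, and properties of the unlabeled calculus are then recovered by a continuity or compactness argument over the labels, as in the untyped case.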


Operationally-based Logical Relations for Idealized Algol

Andrew M. Pitts (University of Cambridge)

I will present a notion of binary logical relation for Idealized Algol (IA) which is parameterized by "binary-relations-with-divergence" between states. It yields an applicative characterization of contextual equivalence for IA and provides a relatively straightforward method for establishing some of the equivalences that have been considered in the literature---including those which hinge upon the undefinability of "snapback" in IA. The definition of the logical relation was inspired by recent work of O'Hearn and Reynolds (on translating IA into a predicatively polymorphic linear lambda calculus). The hard work in the operational version lies in establishing the "fundamental theorem of logical relations" for this logical relation. The proof makes use of a compactness property of fixpoint recursion with respect to evaluation of IA phrases.
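
Purely for orientation, a state-relation-indexed logical relation typically relates commands as follows (an illustrative clause only; the talk's definition uses relations-with-divergence and differs in detail). For a relation r between states:

    \[
    c \;\mathcal{R}^{\mathrm{comm}}_{r}\; c'
    \;\stackrel{\mathrm{def}}{\iff}\;
    \forall (s,s') \in r.\;
    \bigl( c,s \Downarrow t \;\Rightarrow\; \exists t'.\; c',s' \Downarrow t' \wedge (t,t') \in r \bigr)
    \;\text{and symmetrically.}
    \]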


Improvement Theory and its Applications

David Sands (DIKU, University of Copenhagen)

In this talk I will give an overview of my work on operational theories of improvement and their applications. The initial motivation for considering intensional versions of observational equivalence was to support reasoning about the time behaviour of non-strict functional programs [1]. From this work it was a natural step to consider a more general class of improvement theories for functional languages, providing contextually robust theories of optimisation [2]. However, the real pay-off of this study is not in reasoning about efficiency properties per se, but in reasoning about equivalence: the Improvement Theorem [3] provides a condition for the total correctness of transformations on recursive programs. Roughly speaking, the Improvement Theorem says that if the local steps of a transformation are contained in a particular improvement theory, then correctness of the transformation follows. Among other things, this result has furnished correctness proofs for recursion-based automatic program transformations [4].
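
For orientation, the basic improvement relation can be stated as follows (one common formulation; notation and direction vary across the papers cited below): M is improved by N if replacing M by N in any program context never increases the number of evaluation steps needed to reach an answer,

    \[
    M \mathrel{\unrhd} N
    \;\stackrel{\mathrm{def}}{\iff}\;
    \forall C.\;\; C[M] \Downarrow^{n} \;\Rightarrow\; \exists m \leq n.\; C[N] \Downarrow^{m}
    \]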

We will also consider some proof techniques for equivalence based on the improvement theory, and some tantalising connections to Sangiorgi's bisimulation up-to expansion and context.
[1] A Naive Time Analysis and its Theory of Cost Equivalence. Journal of Logic and Computation, 5(4), pp. 495-541, 1995.

[2] Operational Theories of Improvement in Functional Languages. Glasgow Functional Programming Workshop, 1991.

[3] Total Correctness by Local Improvement in Program Transformation. POPL '95.

[4] Proving the Correctness of Recursion-Based Automatic Program Transformations. TAPSOFT '95.


Behavioural Equivalences for Higher-Order Process Calculi

Davide Sangiorgi (INRIA, Sophia-Antipolis)

A higher-order process calculus is a calculus for communicating systems which contains higher-order constructs like communication of terms and term application. Examples are Thomsen's CHOCS, Boudol's gamma-calculus and Sangiorgi's Higher-Order pi-calculus. The main theme of this talk is the meaning of behavioural equivalences on these calculi.

Following Park and Milner, in process algebras the most studied forms of behavioural equivalence are based on the notion of bisimulation. I shall argue that both the standard definition of bisimulation (i.e., the one for CCS and related calculi) and Astesiano and Thomsen's higher-order bisimulation are in general unsatisfactory in higher-order calculi, because they are over-discriminating. To overcome the problem, I resort to Milner and Sangiorgi's barbed bisimulation. The advantage of the latter is that it can be defined uniformly in different calculi --- including higher-order ones. Previous experiments with CCS and the pi-calculus have shown that its induced congruence, called barbed congruence, yields the desired discriminating power. I claim that barbed congruence also gives a natural bisimilarity congruence in higher-order calculi. A serious drawback of barbed congruence is that it is hard to use in practice, due to the quantification over contexts in its definition. I shall therefore look for simple direct characterisations of it, i.e. characterisations which do not use such a quantification.
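
For reference, in its strong form a barbed bisimulation is a symmetric relation R on processes such that P R Q implies (the precise choice of barbs, written here as a predicate P &#x2193;_a, depends on the calculus):

    \[
    \begin{array}{l}
    P \to P' \;\Rightarrow\; \exists Q'.\; Q \to Q' \;\wedge\; P' \mathrel{R} Q' \\[2pt]
    P \downarrow_a \;\Rightarrow\; Q \downarrow_a
    \end{array}
    \]

Barbed congruence is obtained by closing barbed bisimilarity under all contexts, which is the source both of its uniformity across calculi and of the quantification over contexts that the talk seeks to eliminate.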


The Coverage of Operational Semantic Techniques

Scott Smith (Johns Hopkins University)

This talk will look at collections of problems for which operational semantics provides good answers, partial answers, or currently offers little help. The dimensions of programming-language design space the talk will address include memories, objects, explicit continuations, concurrency, and types, including recursive and polymorphic types. Dimensions of semantic space include equivalence, ordering, PERs, monotonicity, continuity, and algebraicity.


Reasoning about State in ML

Ian Stark (University of Cambridge)

Many functional programming languages rely on the elimination of `impure' features: assignment to variables, exceptions and even input/output. But some of these are genuinely useful, and it is of real interest to establish how they can be reintroduced in a controlled way. This talk looks at one example of this: the interaction between higher-order functions and integer references, in the style of Standard ML.

To illustrate how state can be used in ML, we consider various examples in the form of `contextual' (or `observational') equivalences between expressions of the language. These are accompanied by a range of operational methods for proving such equivalences, based on logical relations.

The simplest of these methods verifies the kind of rearrangements and optimisations a compiler might use. The most powerful method can prove equivalences between higher-order functions that use private store in different ways to achieve the same end.
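
A hedged illustration of the kind of equivalence involved (written here in OCaml syntax; the example is illustrative, not one of the talk's): a function that updates a private reference is contextually equivalent to a pure one, because no context can observe the hidden state.

    (* [counted] increments a private reference on each call; since [r]
       never escapes, no ML context can distinguish it from [plain]. *)
    let counted =
      let r = ref 0 in
      fun x -> r := !r + 1; x + 1

    let plain x = x + 1

    let () = assert (counted 3 = plain 3)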


Reasoning about Equivalence of Higher Order Actor Programs

Carolyn Talcott (Stanford University)
(http://www-formal.stanford.edu/clt/home.html)

Bisimulation techniques provide powerful tools for reasoning about basic process constructs such as composition and action prefixing. It seems more difficult to use these methods to reason about equivalence of higher-order expressions with concurrency constructs, particularly in the presence of a fairness requirement.

In [AMST95] we study the semantics of a language obtained by extending an untyped functional language with primitives for actor computation. An operational semantics is defined using a labeled transition system, and a notion of observational equivalence for actor expressions is defined analogous to testing equivalence.

In this talk we will describe a method developed for proving equivalence of actor expressions based on a notion of uniform computation. We will illustrate the use of this method to prove a variety of equational laws for actor expressions. Although the work was done in the setting of a specific actor language, the method should apply more generally to lambda based concurrent languages with an appropriate operational semantics.

[AMST95] Agha, G., Mason, I. A., Smith, S. F. and Talcott, C. L. A Foundation for Actor Computation.


Untyped Lambda-Calculus with Input-Output

Mitch Wand (Northeastern University) and Jerzy Tiuryn (University of Warsaw)

We introduce an untyped lambda-calculus with input-output, based on Gordon's continuation-passing model of input-output. This calculus is intended to allow the classification of possibly infinite input-output behaviors, such as those required for servers or distributed systems. We define two terms to be operationally approximate iff they have similar behaviors in any context. We then define a notion of applicative approximation and show that it coincides with operational approximation for these new behaviors. Last, we consider the theory of pure lambda-terms under this notion of operational equivalence.
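
The behaviours in question can be pictured as possibly infinite trees of interactions; the following OCaml type is a hedged sketch of a teletype-style version of Gordon's continuation-passing model (names and details are illustrative, not taken from the paper):

    type behaviour =
      | Done                                    (* terminate *)
      | Output of char * (unit -> behaviour)    (* emit a character, then continue *)
      | Input  of (char -> behaviour)           (* wait for an input character *)

    (* A finite behaviour: print one character and stop. *)
    let once : behaviour = Output ('!', fun () -> Done)

    (* An everlasting echo process: read a character and write it back, forever. *)
    let rec echo : behaviour =
      Input (fun c -> Output (c, fun () -> echo))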


A Co-inductive Proof of the Soundness of Region Inference

Mads Tofte (University of Copenhagen)

Region Inference is a technique for implementing programming languages which are based on the typed call-by-value lambda-calculus. At runtime, the store consists of a stack of regions. All values, including function values, are stored in regions. Region Inference is a type-based technique for inferring statically when regions can be allocated and freed.

The purpose of this talk is to present a theorem which states that the region inference rules are sound. The proof relies on establishing a relationship of Consistency between the region-based semantics and a standard operational semantics for the lambda calculus. Consistency is defined as the maximum fixed point of a monotonic operator. The proof reveals that Consistency deteriorates during region-based execution; the soundness relies on the fact that there is always enough consistency left for the remainder of the computation to succeed.
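
The coinductive machinery referred to here is the standard one: since the operator F is monotone, by the Knaster-Tarski theorem its maximum fixed point is the union of all post-fixed points, so to show that two configurations are consistent it suffices to exhibit a post-fixed point containing the pair.

    \[
    \mathrm{Consistent} \;=\; \nu F \;=\; \bigcup\,\{\, R \mid R \subseteq F(R) \,\}
    \]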


On Reduction Based Process Semantics

Nobuko Yoshida (University of Manchester)

I propose a formulation of semantic theories for processes which does not rely on the notion of observables or convergence. The new construction is based solely on a reduction relation and equational reasoning, but can induce meaningful theories for processes, in both weak and strong settings. The resulting theories in many cases coincide with, and sometimes generalise, observation-based formulations of behavioural equivalence. The basic construction of reduction-based theories is presented, taking a simple name-passing calculus (called the nu-calculus) and its extensions as an example.


Harlequin Sponsorship

The organisers and the Newton Institute are grateful for generous sponsorship from Harlequin Ltd.