Return-Path: <John.Harrison-request@cl.cam.ac.uk>
Delivery-Date: 
Received: from leopard.cs.byu.edu (no rfc931) by swan.cl.cam.ac.uk 
          with SMTP (PP-6.5) outside ac.uk; Thu, 13 Jul 1995 14:20:26 +0100
Received: by leopard.cs.byu.edu (1.37.109.15/16.2) id AA167880950;
          Thu, 13 Jul 1995 07:09:10 -0600
Sender: hol2000-request@lal.cs.byu.edu
Errors-To: hol2000-request@lal.cs.byu.edu
Precedence: list
Received: from swan.cl.cam.ac.uk by leopard.cs.byu.edu 
          with ESMTP (1.37.109.15/16.2) id AA167850927;
          Thu, 13 Jul 1995 07:08:47 -0600
Received: from albatross.cl.cam.ac.uk (user drs1004 (rfc931)) 
          by swan.cl.cam.ac.uk with SMTP (PP-6.5) to cl;
          Thu, 13 Jul 1995 14:09:08 +0100
X-Mailer: exmh version 1.5.2 12/21/94
To: hol2000@leopard.cs.byu.edu
Subject: Re: Theory signatures & remaking definitions in HOL
In-Reply-To: Your message of "Mon, 10 Jul 1995 00:13:51 +0200." <95Jul10.001356met_dst.8082@sunbroy14.informatik.tu-muenchen.de>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Thu, 13 Jul 1995 14:08:54 +0100
From: Donald Syme <Donald.Syme@cl.cam.ac.uk>
Message-Id: <"swan.cl.cam.:008570:950713130930"@cl.cam.ac.uk>


More hol2000:

> > As far as I can tell, the only really neat solution to this problem is
> > to take the Isabelle approach and store references to context
> > information inside types and terms (certified terms and types in
> > Isabelle).  This eliminates the need for global constant and type
> > symbol tables without a loss of security. In this setting, functions
> > like "mk_comb" do a merge of theory signatures for the two terms.
> 
> Don is right -- this is a problem. However, may I quibble with his
> proposed solution? My knee-jerk response is that working at the term and
> type level may be too low.

Certainly I wouldn't be proposing this if I hadn't seen it work
in Isabelle (I'm not talking from experience of using Isabelle, 
just from what I've read in the Isabelle manuals and source code).
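
To make the idea concrete, here is a rough sketch in OCaml of terms
carrying their theory signatures, with mk_comb merging the signatures
of rator and rand.  The representation and all names (signat,
merge_sigs, etc.) are invented for illustration -- this is not taken
from HOL or Isabelle source, just the shape of the idea:

```ocaml
(* Sketch: terms carry their theory signature; mk_comb merges them.
   All names and the representation are invented for illustration. *)

module StringMap = Map.Make (String)

type hol_type = Tyvar of string | Tyapp of string * hol_type list

(* A theory signature: constant names mapped to their (generic) types. *)
type signat = hol_type StringMap.t

type term =
  | Const of string * hol_type * signat    (* each term carries its context *)
  | Comb of term * term * hol_type * signat

exception Clash of string

(* Merge two signatures, refusing if they disagree on any constant. *)
let merge_sigs (s1 : signat) (s2 : signat) : signat =
  StringMap.union
    (fun c ty1 ty2 -> if ty1 = ty2 then Some ty1 else raise (Clash c))
    s1 s2

let sig_of = function Const (_, _, s) -> s | Comb (_, _, _, s) -> s
let type_of = function Const (_, ty, _) -> ty | Comb (_, _, ty, _) -> ty

(* mk_comb checks types as usual, but also merges the signatures of
   rator and rand, so the result is well-formed in the merged context. *)
let mk_comb (f : term) (x : term) : term =
  match type_of f with
  | Tyapp ("fun", [dom; rng]) when dom = type_of x ->
      Comb (f, x, rng, merge_sigs (sig_of f) (sig_of x))
  | _ -> failwith "mk_comb: type mismatch"
```

The point is that well-formedness is checked against the signatures
the two terms carry, not against a global table; the Clash exception
is where terms from incompatible contexts would be rejected.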

So why might associating terms and types with their theory
signatures be "too low a level"?  There are three possible reasons
I can think of:
	(a) it is fundamentally incorrect, in the sense that terms
	    and types would no longer model what they are meant to;
	(b) it is infeasible in terms of implementation cost (speed
	    and memory);
     or (c) the implementation cost outweighs the benefits.

I'll consider (a) here; Isabelle demonstrates feasibility and thus
answers (b), and my original question was really an attempt to get
some help in answering (c).

It seems to me that if types and terms continue to be represented
as they are at present, HOL is doomed to remain permanently
"monotonic", i.e. restricted to a single global theory signature
in which no constant or type can ever be redefined.
We all know that HOL only admits well-constructed types
and terms: if a type or term exists anywhere in a HOL session,
you expect it to be well-constructed.  This is a great property, and not
something we want to lose as the HOL system moves forward.
However, "well-constructed" only has meaning relative to some
particular theory signature (i.e. set of constants and type
constructors).  In HOL this check is always made against the
(one and only) global theory signature.  We are faced with the
constraint that once a type or term has been created, it
must remain valid against the global theory signature forever.
Having one global theory signature effectively rules out
constant redefinition, since redefinition might invalidate
existing types and terms.  This would, I believe, also be a
problem for Konrad's proposed solution.
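
The constraint above can be caricatured in a few lines of OCaml
(invented names, not actual HOL source): a single mutable table of
constants which may only ever grow.  Redeclaring a constant must be
refused, because existing terms were checked against the old entry
and could silently become ill-formed:

```ocaml
(* Caricature of the "one global signature" discipline: the signature
   may be extended, but never overwritten.  Names are invented. *)

let the_signature : (string, string) Hashtbl.t = Hashtbl.create 16

exception Already_defined of string

(* Declare a new constant with its type; refuse any redeclaration. *)
let new_constant (name : string) (ty : string) : unit =
  if Hashtbl.mem the_signature name then raise (Already_defined name)
  else Hashtbl.add the_signature name ty
```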

If theory signatures are stored with types and terms (note:
efficiently & without duplication!), these problems
go away.  All terms and types are guaranteed to be well-formed
against the context they carry with them.  Constants can be "redefined"
by creating new logical contexts (which is what is effectively
happening) and then redoing everything that needs to be done (which
has to happen anyway).
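
Again as a rough, self-contained sketch (all names invented):
"redefinition" just means starting a fresh context in which the
constant has its new type.  Terms from the old and new contexts
cannot be mixed, because merging their signatures fails; within
either context alone, everything remains well-formed:

```ocaml
(* Sketch of redefinition-by-fresh-context.  Each context is its own
   signature; a merge that hits a disagreement raises Clash. *)

module StringMap = Map.Make (String)

exception Clash of string

(* Merge two signatures, refusing if they disagree on any constant. *)
let merge_sigs s1 s2 =
  StringMap.union
    (fun c ty1 ty2 -> if ty1 = ty2 then Some ty1 else raise (Clash c))
    s1 s2

(* Old context: "width" is a num; the new context redefines it. *)
let old_ctx = StringMap.singleton "width" ":num"
let new_ctx = StringMap.singleton "width" ":bool"

(* Terms may be combined only if their contexts merge cleanly. *)
let compatible s1 s2 =
  try ignore (merge_sigs s1 s2); true with Clash _ -> false
```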

From an interface designer's point of view, the monotonicity
of HOL is a real problem, as it restricts the user models 
that can be supported.  For instance, say I build 
a set of interface tools for hardware
verification.  A user comes along, and wants to work on a new
circuit based on work that has been done already on a
similar circuit.  Why can't they work concurrently on
several circuits, in the same way that a word-processor user
can work concurrently on several documents (with the advantages
of cut & paste and so on)?  I could build this in Isabelle easily, but I
can't in HOL, without fighting the system at each step.  In HOL
I'm pretty much restricted to the ever-growing
theory graph model.  This is simply because the underlying technology
does not support multiple contexts the way it should.

Having a single context is bearable when you only have a command line,
but becomes a problem when visualisation tools let you access
more information simultaneously.

So, in short, to support multiple contexts (which covers the case
of constant redefinition) I think the Isabelle approach looks
very feasible.  Now, getting back to my original question, has
anyone got any ideas about the efficiency issues involved?

Don Syme

-----------------------------------------------------------------------------
At the lab:							     At home:
The Computer Laboratory                                          Trinity Hall
New Museums Site                                                      CB2 1TJ
University of Cambridge                                         Cambridge, UK
CB2 3QG, Cambridge, UK
Ph: +44 (0) 1223 334688				      Ph: +44 (0) 1223 302521
http://www.cl.cam.ac.uk:80/users/drs1004/     
email: Donald.Syme@cl.cam.ac.uk
-----------------------------------------------------------------------------

