From windley@cheetah  Fri Feb 16 11:35:59 1990
Received: by iris.ucdavis.edu (5.57/UCD.EECS.2.0)
        id AA12885; Fri, 16 Feb 90 11:35:59 PST
Received: from cheetah.ucdavis.edu by clover.ucdavis.edu (5.59/UCD.EECS.1.11)
        id AA27938; Fri, 16 Feb 90 11:40:16 PST
Received: by cheetah.ucdavis.edu (AIX  2.1.2/3.14)
        id AA00153; Fri, 16 Feb 90 11:31:21 PST
Message-Id: <9002161931.AA00153@cheetah.ucdavis.edu>
To: info-hol@clover
Subject: 1989 User's Group Meeting Abstracts (1 of 2)
Date: Fri, 16 Feb 90 11:31:20 -0800
From: Phil Windley <windley@cheetah>

% This file contains abstracts of the talks at the 1989 HOL User's Group
% Meeting held in December at Cambridge University.  The abstracts were
% prepared by the speakers.  I just collected them.  I have included the
% e-mail addresses of the speakers so that you can contact them with
% questions or requests for further information.  I hope that this
% collection of abstracts will give those of you who were not able to
% attend some information about what was discussed.
%
% The latex source is entirely self-contained and, as far as I know,
% doesn't depend on any obscure tex files found only at Davis.  You should
% be able to tex the file by saying "latex file".  To make the file,
% concatenate the two parts together (preferably in order ;-) and edit out
% any mail headers, etc.  (these comments can stay since TeX will ignore
% them).
%
% Cheers,
%
% --phil--

% -*- LaTeX -*-  (for emacs)
%**start of header
\documentstyle[12pt,fleqn]{article}

   % ---------------------------------------------------------------------
   % A few parameters
   % ---------------------------------------------------------------------
   \setlength{\unitlength}{1in}           % unit of length = 1in
   \setlength{\baselineskip}{16pt}        % line spacing = 16pt

   \pagenumbering{arabic}                % arabic page numbers
   \setcounter{page}{1}                  % start at page 1

   \setlength{\topmargin}{-.250in}
   \setlength{\textwidth}{6.75truein}
   \setlength{\textheight}{8.5truein}

   % Double sided
   %\oddsidemargin  6.6truemm
   %\evensidemargin -7.4truemm

   % Single Sided
   \setlength{\oddsidemargin}{0pt}
   \setlength{\evensidemargin}{0pt}

   \setlength{\parskip}{.125in}

   % ---------------------------------------------------------------------
   % New commands
   % ---------------------------------------------------------------------

   \newcommand{\talk}[2]{
        \pagebreak\par
        \begin{center}\large\bf #1 \end{center}
        \begin{center} #2 \end{center}
        }

   % ---------------------------------------------------------------------
   % Pagestyle
   % ---------------------------------------------------------------------

   \pagestyle{headings}


\begin{document}

%**end of header

\title{Abstracts from the \\
       1989 HOL Users Group Meeting}

\author{Phillip J. Windley\\
        {\small\rm (Editor)}\\
                \null\\
       {\small\em Division of Computer Science}\\
       {\small\em Department of Electrical Engineering and Computer Science}\\
       {\small\em University of California, Davis}}

\date{December 14-15, 1989}


\maketitle

\centerline{\bf Editor's Note}

\begin{quote}\em
The 1989 HOL User's Group Meeting was held December 14-15, 1989 at Trinity
Hall College, Cambridge University.  This document gives abstracts,
prepared individually by the speakers, of the technical presentations from
the conference.  For more information on a given talk, please contact the
authors directly via electronic mail; their addresses are provided.
\end{quote}


\talk{Keynote Talk:
      Formal Verification: \\
      The Art of Being Economical with the Truth}
     {Keith Hanna\\
      {\em fkh@ukc.ac.uk}\\
      University of Kent, UK}
Design verification can be seen as a game played between society, wishing
to protect itself from disaster, and designers, wishing to promote their
products. Society's interests are entrusted to an authority whose task is
to select a verification method and oversee its application. A verification
method has two components: a formal logic and an interpretation. The
purpose of the former component is (to borrow a phrase) to be ``economical
with the truth;'' its axioms and rules of inference should be few in number
and self-evidently sound.  Higher-order logic can adequately meet this
requirement.  The purpose of the latter component, the interpretation, is
to relate the formal logic to the physical world.  It is vitally important
that this relationship be examined with full rigor; the meaning of each
primitive symbol must be precisely defined, the truth of each axiom and the
intent of each definition must be checked. This activity lies at the
intersection of engineering and logic and thus responsibility (between
designer and logician) for undertaking it can be blurred. This in turn can
lead to a situation where the use of formal methods actually decreases
design integrity; the logician verifies the design against simplistic (i.e.,
false) assumptions and the engineer, knowing that the design will be
``formally verified,'' lets his guard drop.

A criterion sometimes advanced for formal verification is ``Would I fly
it?''  This is a rather weak test to apply since experience tells us that
most designs do, in fact, work satisfactorily most of the time. A stronger
criterion for formal verification would be game-based, perhaps along the
following lines: ``Would I entrust myself to an apparatus designed and
verified (albeit strictly in accordance with the rules of the verification
method), not by a benign engineer, but rather by an omniscient demon
setting out, with malicious intent, to exploit every weakness in the formal
logic, fragility in its computational implementation and imprecision in its
interpretation?''

The second part of the talk described {\em VERITAS+}, a design logic having
dependent types and subtypes [1].  Such types are useful for describing the
bounded entities (e.g., n-bit numerals) typically occurring in digital
system specifications.  For example, the function {\tt val} that maps a
numeral to a number may be given the dependent type:
\begin{verbatim}
   val: [b:natp] * [l:nat] -> V(b,l) -> N (b exp l)
\end{verbatim}
(Read as: {\tt val} takes a positive natural number $b$ and a natural
number $l$ and then takes a length-$l$ base-$b$ numeral, and yields a
natural number in the subrange less than $b$ to the power of $l$.)  The
penalty incurred in the use of dependent types is the loss of decidable
type-checking; in practice, a tactic-based approach allows a reasonable
degree of automation to be realized.  The main advantage of dependent types
is the increased clarity of specifications.
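As a concrete instance (the numeral and notation here are ours, for
illustration only), taking $b=2$ and $l=3$ specializes the type above to
3-digit binary numerals:
\begin{verbatim}
   val (2,3) : V(2,3) -> N(8)
   val (2,3) [1;0;1] = 5      -- and indeed 5 < 2 exp 3
\end{verbatim}
so the type itself records that the value of any 3-digit binary numeral
lies below 8.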

\noindent
{\bf References:}
\begin{enumerate}
\item
``Specification and Verification
using Dependent Types,'' Technical Report, University of Kent, 1989.
\end{enumerate}

\newpage
\section{Software and Protocol Verification}
{\bf Session Chair:}  Roger Hale, Cambridge University

\begin{flushleft}
{\bf December 14, Afternoon}

\vspace{.25truein}
{\bf Speakers:}

\begin{tabular}{ll}
John Cullyer    & {\em Railway Signalling}                      \\
                &                                               \\
Rachel Cardell-Oliver &  {\em Specifying and Verifying a Class of Protocols} \\
                &                                               \\
Albert Cammilleri & {\em Mechanizing CSP Trace Theory in HOL}   \\
                &                                               \\
Joakim von Wright & {\em Formalizing Program Refinements}       \\
                &                                               \\
\end{tabular}
\end{flushleft}

\talk{Railway Signalling}
     {John Cullyer\\
      {\em eswjc@warwick.ac.uk}\\
Warwick University, UK}

\begin{center}
{\em Abstract not available}
\end{center}

\talk{Ideas for Specifying a Class of Protocols and\\
      Verifying Implementations using Higher Order Logic}
     {Rachel Cardell-Oliver\\
      {\em rco\%uk.ac.cam.cl@nsfnet-relay.ac.uk}\\
Cambridge University and \\
      Defense Science and Technology Organization, Australia}
Computer network protocols are members of the class of real time concurrent
programs.  The protocols which, say, transfer data from one place
to another in a computer network share not only the same function, but also
the same behavior, in the sense that they all rely on the same basic
mechanisms.

The most obvious similarity between protocols of the same class is their
physical structure.  Each protocol is implemented by a sender and a receiver
(programs on separate computers) which communicate only via a
bi-directional channel.  Protocol structure is captured in HOL using
techniques developed for hardware specification.  Each physical entity
(sender, receiver and channel) is modeled by a higher order predicate and
communication between the entities is modeled by shared parameters to
these predicates.
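In this style a protocol implementation might be written as the
conjunction of its parts, with the channel wires hidden by existential
quantification (the predicate and variable names below are illustrative,
not taken from the actual specification):
\begin{verbatim}
   IMPL (in,out) =
      ?w x. SENDER (in,w,x) /\ CHANNEL (w,x) /\ RECEIVER (out,w,x)
\end{verbatim}
exactly as composed devices are specified in hardware verification.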

Less obvious to model is the mechanism for reliable communication over an
unreliable channel: positive acknowledgment.  Each data message which is
transmitted must be acknowledged by a returned message.  Transmission and
reception predicates in the sender and receiver achieve this by
continuously transmitting data or acknowledgments (respectively) and using
local state variables to chart progress.

Properties which vary between protocols of the same class are captured by
parameters of the specification.  For example, window sizes and channel
delays are modeled by parameters of type $sequence$ and $time$
respectively.  The choice of which data message to send next is modeled by
a parameter of type $time \rightarrow sequence$ which, at any time, chooses
the next message to transmit.

The correctness of a protocol depends on complex real time interactions
between the sender and receiver programs and the channel.  The channel's
response is non-deterministic.  For example, messages may be delivered or
lost and if delivered may be delayed for a variable, but bounded, time.
Real time intervals are specified using a constant added or subtracted from
the current time.  For example a timeout occurs at time $t$ if the value of
the sender's state at time $t$ is the same as its state at time
$t-TIMEOUT$.  Protocol liveness is determined by an interval of maximum
persistence, $maxP$, in the sender.  That is, the protocol will be aborted
if the sender's state at time $t$ is the same as its state at time
$t-maxP$.
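Writing $s$ for the sender's state history, the two conditions amount to
the following predicates on time (a paraphrase in the notation of this
abstract, not a quotation from the specification):
\[
timeout~t = (s~t = s~(t-TIMEOUT)) \qquad
abort~t = (s~t = s~(t-maxP))
\]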

Verification that the protocol behavior specification discussed above
satisfies the functional specification for a class is still in progress but
near completion.  Earlier specifications are being restructured and the
proofs updated.  The next task is to specify and verify real time protocol
implementations.  The following is a possible route for performing this
task.

\begin{enumerate}
\item
Define mappings between the names of program variables in an implementation
and the variables of the HOL specification.
\item
Define the relationship between the real time scales of the program
and the abstract time scales of the HOL specification.
Techniques for doing this can be found in the work of Tom Melham,
Jeff Joyce, and John Herbert.
\item
Show, possibly using Floyd-Hoare proof rules, that the updated values of
program variables correspond to the updated variables of the HOL
specification.  A HOL description of a Floyd-Hoare proof system similar to
the one defined by Mike Gordon could be used for this task.
\item
Interpret real time events such as packet arrival and timeout.  Should
these be modeled using polling, a wait statement which jumps to the first
condition to be satisfied, a flag held high until the event is recognized,
etc.?
\end{enumerate}

Finally, having shown that a particular implementation is a member of the
class of data transfer protocols, I would like to prove that it is a
non-trivial member of that class by proving properties about the protocol's
performance.  One option for doing this is to assign probabilities to the
non-deterministic choices of the channel and from this deduce real time
performance properties of the protocol.

\talk{Mechanizing CSP Trace Theory in HOL}
     {Albert Cammilleri\\
{\em acamille@hplb.hpl.hp.com}\\
      Hewlett Packard Laboratories, UK}
In this talk I describe a mechanization of the formal language CSP in
higher order logic.  CSP is one of several process algebras developed for
reasoning about concurrency and communication, and mathematical proof plays
a major part in reasoning using the language.

The CSP semantics modeled in HOL is that of traces. This is
one of the simplest semantic models developed for CSP over the years,
but is nonetheless useful and provides insight into the nature
of the problems involved in mechanizing the process algebra, and
the practicality of the final results.

One thing which immediately becomes evident from the mechanization
is the amount of general mathematics required for reasoning in the
semantics of CSP (e.g. set theory, fixed points). The talk describes
the mechanization of some fundamental CSP notions such as events,
alphabets and traces using the necessary mathematical
foundations, and how a data type is defined to represent CSP
processes. It is shown how the trace semantics of the CSP
operators are defined conservatively on this data type, and how high
level process algebra laws are derived from the definitions of the
operators. Finally it is shown how the adopted semantics can be
separated from the syntax of the language by first defining a
syntactic type of CSP processes using Melham's type definition
package, and then formalizing a denotational semantics for the
language using the trace semantics.

\talk{Formalizing Program Refinements}
     {Joakim von Wright\\
      {\em jwright@finabo.abo.fi}\\
{\AA}bo Akademi, Finland}

The refinement calculus is a theory of correctness preserving program
transformations, based on Dijkstra's weakest preconditions. It has
proved useful for different kinds of program development, including data
refinement and development of parallel programs. We write $S <= S'$ (saying
that $S$ is refined by $S'$) if $wp(S,Q) => wp(S',Q)$ holds for all
predicates $Q$.  Thus the relation $<=$ preserves total correctness.
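As a simple illustration (ours, not from the talk): if $S$ assigns an
arbitrary even natural number to $x$ and $S'$ is $x:=0$, then
$wp(S,Q) = \forall n.~Q[2n/x]$, which implies $wp(S',Q) = Q[0/x]$ for
every $Q$ (take $n=0$); hence $S <= S'$.  Reducing nondeterminism is
always a refinement.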

Program development in the refinement calculus starts from a
specification $S_0$, and through a series
\[
                 S_0 <= S_1 <= ... <= S_n
\]
of transformations, ends with a program $S_n$.

Our goal using HOL is threefold. First, we want to formalize refinement
concepts in HOL (i.e., define a specification language with weakest
precondition semantics). Second, we want to check the proofs of refinement
laws and transformation rules. Third, we want to do program refinements
within the HOL theory of refinement, with program development resulting in
a theorem $\vdash~S_0~<=~S_n$ in that theory. Working within
the same theory where the transformation rules were proved correct
guarantees that the basis for the work is solid.

We define predicates semantically, as the type $state\to bool$, where
$state$ is the type $var\to val$. The variables could be strings and the
values natural numbers. Operators on predicates are defined by
lifting; e.g., ($p$ and $q$) is defined as
\[
\lambda~(s:state)~.~p~s \land q~s
\]
Expressions have type $state \to val$. Substitutions on predicates are a
bit tricky to define, since predicates are not syntactically defined.
Predicate transformers have type $pred \to pred$. Among other things we
prove that every monotonic predicate transformer has a least fixpoint.

Commands are a recursive type $cmd$. Our language is Dijkstra's guarded
commands, extended with assertions, nondeterministic assignment and block
with local variables. The semantics is given by a function $wp:cmd \to pred
\to pred$, associating every command with its weakest precondition
predicate transformer. For iteration we give a fixpoint definition, thus
permitting nondeterminism to be unbounded. The refinement relation is
defined in a straightforward way.
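Representative clauses of such a semantics (our sketch of the standard
weakest-precondition equations, not a quotation from the HOL theory) are:
\begin{verbatim}
   wp skip Q       = Q
   wp (assert p) Q = p and Q
   wp (S1 ; S2) Q  = wp S1 (wp S2 Q)
\end{verbatim}
with iteration given as a fixpoint of a predicate transformer built from
clauses like these.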

As a test of our formalization, we have proved, among other things,
Dijkstra's ``healthiness conditions'' (strictness, monotonicity and
conjunctivity) of the wp function. Strictness and monotonicity were
simple but conjunctivity was hard. In fact we could not prove
conjunctivity of iteration in the case of unbounded nondeterminism
(this seems to require the use of ordinal chains and transfinite
induction).

We have also attempted to prove refinement laws, which various case
studies have shown to be useful. Many general laws were easy to
prove. Rules concerning loops (iteration) were more difficult.
Refinement of assignment statements was also sometimes difficult
to prove, because the semantics of assignments involves
substitutions.

An important property of the refinement relation is subcomponent
monotonicity: any subcomponent $S$ of a program $T$ can be replaced by
another program fragment $S'$, if $S<=S'$ holds. This method can be
automated by considering the program $T$ to be an application
$(\lambda~X~.~T(X))S$. We have written an ML function which takes two
arguments: the theorem $\vdash S<=S'$ and the term $\lambda~X~.~T(X)$ and
which returns the theorem
\[
\vdash~T(S)~<=~T(S')
\]
This shows that the principle of program derivation in small steps can be
used.
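For instance (our illustrative instance, writing {\tt ;} for sequential
composition), applying this function to $\vdash~S~<=~S'$ and the term
$\lambda~X~.~X;T$ returns $\vdash~S;T~<=~S';T$, so a refinement of a
fragment is promoted to a refinement of the enclosing program.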

So far we have only worked with very small examples. Besides aiming at
larger examples we are also trying to make the theory more general. This
involves extending the language with, e.g., recursion and multiple
assignments. The block construct has proved to be a problem; it seems
necessary to index the refinement relation with a list of global variables
in order to make the theory strong enough to handle local variables. This
will also be a focus of future work.

\newpage
\section{Hardware Verification Session}
{\bf Session Chair:}  John Herbert, Cambridge University

\begin{flushleft}
{\bf December 15, Morning}

\vspace{.25truein}
{\bf Speakers:}

\begin{tabular}{ll}
Luc Claesen     & {\em CHEOPS: Interfacing HOL to CATHEDRAL}    \\
                &                                               \\
Shiu-Kai Chin   & {\em Combining Hardware Synthesis and Silicon Compilation} \\
                &                                               \\
David Sheperd   & {\em SAFEMOS}                                 \\
                &                                               \\
Phillip J. Windley & {\em Hierarchical Verification of Microprocessors} \\
                &                                               \\
\end{tabular}
\end{flushleft}

\talk{CHEOPS: Interfacing HOL\\
      to CATHEDRAL}
     {Luc Claesen\\
      {\em claesen@imec.be}\\
      IMEC, Belgium
      }

A basic research action cooperation among IMEC, the HOL group at the
University of Cambridge, and Philips (Dr.\ A. Kalker) will concentrate
on the formal proof of steps in the CATHEDRAL synthesis systems.

The CATHEDRAL synthesis systems are silicon compilers dedicated towards
the automatic synthesis of digital signal processing systems. These systems
are encountered in telecommunication, digital audio, radar, etc.
Cathedral-1 is targeted towards bit-serial implementations of signal
processing systems [1].
Cathedral-2 synthesizes dedicated multi-processor
micro-programmed chip architectures [2].
The synthesis in Cathedral-2 starts from a high level algorithm
specification in the applicative SILAGE language. In a first step, a
customized datapath allocation is done with a tool called Jack-the-Mapper.
This tool builds up the datapaths according to the requirements of the
algorithm, and allows tradeoffs between area and speed to be made. After the
datapath allocation a scheduling of the operations on the datapath is done
with the ATOMICS program. The number of busses can be reduced by bus
merging at the cost of a few additional cycles. In Cathedral-2 a generic
controller structure is used. After the scheduling the controllers are
generated in the CGE program. At this point the interconnection of all
physical building blocks is known and the layout phase can start. In
Cathedral-2 a number of predefined parameterized modules exist (ALU, ACU,
MULT, ...) that can be adapted in a very flexible way to the application at
hand.

In this project the goal is to use the HOL system to verify the
results of the synthesis processes in CATHEDRAL. The development of
complicated proofs and theories will be paid for by their more frequent
use in the cross check of a synthesis process. Even in more established,
so-called ``correct-by-construction'' processes such as standard-cell or
gate-array layout generation, cross checks are done via tools such as
netlist comparison. Probably not all of the tasks in the CATHEDRAL silicon
compilation environment will be tractable for correctness
proofs in HOL. Therefore individual aspects of the CATHEDRAL system will be
attacked first, starting with the parameterized module libraries.
The future goal is to develop constructive methods, based on transformational
design, that will result in proven synthesis steps in CATHEDRAL as well
as in manual design steps.

\noindent
{\bf References:}
\begin{enumerate}
\item
``Custom design of a VLSI PCM-FDM transmultiplexer from
system specifications to circuit layout using a computer aided
design system'', R. Jain, F. Catthoor, J. Vanhoof, B. Deloore, G. Goosens,
N. Goncalves, L. Claesen, H. Van Ginderdeuren, J. Vandewalle, H.
De Man, Joint special issue of the IEEE transactions on
Circuits and Systems Vol. CAS-33, No.2, pp. 183-195, February 1986,
and the IEEE Journal of Solid-State Circuits Volume SC-21, No.1, pp.
73-85, February 1986, on VLSI analog and digital signal processing.
\item
``CATHEDRAL-II: A Silicon Compiler for Digital Signal Processing
Multiprocessor VLSI Systems'', H. De Man, J. Rabaey, P. Six, L. Claesen,
IEEE Design and Test of Computers, Vol.3, Nr.6, December 1986, pp.13-26.
\end{enumerate}


\talk{Combining Hardware Synthesis and\\
      Silicon Compilation}
     {Shiu-Kai Chin\\
      {\em chin@sutcase.case.syr.edu}\\
      Syracuse University}

One of the main objectives of this effort is to increase the level of
machine-executable and machine-verified design knowledge.  The rate of
increase in the level of abstraction of design procedures must track
corresponding increases in circuit integration to keep the level of human
design effort for large systems manageable.  The availability of
machine-executable and machine-verified design procedures enables designers
to design at a higher level of abstraction with confidence.  Towards this
end, several ongoing efforts have been undertaken and are summarized below.
\begin{enumerate}\item
Verified design functions for inner product hardware.

Several higher order functions used to create inner product hardware have
been described and verified in HOL.  The synthesis functions support
general signed-binary representations and are parametric in wordsize
and interconnection schemes.  Some of the results are reported in [1,2,3].
\item
Linkages to silicon compilers.

One of our objectives is to use the above functions to create designs which
are then laid out using a commercial cell library and silicon compiler.
Currently, we are using Silicon Compiler System's Generator Development
Tools (GDT).  GDT supports full custom, semi-custom, and standard cell
design.  GDT uses parametric cell generators to instantiate specific
layouts and is capable of creating a specific layout from a register
transfer level schematic consisting of standard and user defined cells.

We also intend to incorporate the synthesis functions described above
into Mike Fourman's LAMBDA system [4].  Our hope is that schematics generated
by LAMBDA using verified design rules can be used as inputs for GDT.

\item
An experiment verifying an abstract machine for a declarative language
\end{enumerate}

As part of another project, we are developing a language which combines
both functional and logic programming paradigms within a reduction setting.
This language is an extension of J. Alan Robinson's LOGLISP language.

The abstract machine underlying the language [5,6] is a combination of
the Three Instruction Machine (TIM) [7] and the Warren Abstract Machine
(WAM) [8].  We have described the semantics of the language in [6] using
Plotkin-style operational semantics [9].  Proofs relating the abstract machine to
the language have been done using the CLIO theorem prover.

Also, preliminary designs have been done to create hardware supporting
the abstract machine.  The VLSI designs have been done using the GDT
tool set.

\noindent
{\bf References}
\begin{enumerate}
\item
    Chin, S.-K. and Stabler, E.P., ``Synthesis of Arithmetic Hardware Using
    Hardware Meta-Functions,'' to appear in IEEE Transactions on
    Computer-Aided-Design, 8/90.
\item
    Chin, S.-K., ``Verified Synthesis Functions for Negabinary Arithmetic
    Hardware,'' Applied Formal Methods for Correct VLSI Design, Luc Claesen,
    Editor, Elsevier.
\item
    Chin, S.-K., ``Combining Engineering Vigor with Mathematical Rigor,''
    Hardware Specification, Verification and Synthesis: Mathematical Aspects,
    Editors: M. Leeser and G. Brown, Lecture Notes in Computer Science,
    Volume 408, Springer-Verlag, 1990.
\item
    Fourman, M. P., Harris, R. L., ``Lambda - Logic and Mathematics Behind
    Design Automation,'' Abstract Hardware Limited, Uxbridge, UB8 3PH, U.K.
\item
    Jamsek, D., Greene, K.J., Chin S.-K., Humenn, P.R.,  ``WINTER: Wams IN
    Tim Expression Reduction,'' Proceedings of the North American Logic
    Programming Conference, Cleveland, Ohio, October 16-19, 1989.
\item
    Jamsek, D.A., Chin, S.-K., ``Structured Operational Semantics for a Combined
    Function and Logic Programming Language,'' CASE Center Technical Report
    No. 8915, Syracuse University, Syracuse, NY, January 23, 1990.
\item
    Fairbairn, J., Wray, S., ``TIM: a simple lazy abstract machine to execute
    supercombinators,'' Proceedings of the Functional Languages and Computer
    Architecture Conference, 1987.
\item
    Warren, D.H.D., ``An abstract PROLOG instruction set,'' SRI International
    Technical Note 306, 1983.
\item
    Plotkin, G.D., ``A Structured Approach to Operational Semantics,''
    Technical Report DAIMI FN-19, Computer Science Department, Aarhus
    University, September 1981.
\end{enumerate}


\talk{SAFEMOS}
     {David Sheperd\\
      {\em des@inmos.com}\\
      INMOS Limited, UK}

\def\implies{\Rightarrow}

\paragraph{Apology}
I started with an apology that I had expected to have several weeks in
which to prepare material to present at the meeting but that this time had
suddenly disappeared when I was given the task of sorting out the INMOS end
of an ESPRIT project proposal. Hence my talk would consist of a brief
introduction to the SAFEMOS project followed by some material I had
prepared earlier but not used.

\paragraph{Project Reports}

\subparagraph{SAFEMOS}
The SAFEMOS\footnote{SAFEMOS is {\em not\/} an acronym.} project is a UK
DTI IED project whose partners are

\begin{itemize}
\item INMOS ltd
\item SRI International Cambridge Research Center
\item Oxford University Programming Research Group
\item Cambridge University Computer Laboratory
\end{itemize}

The aim of the project is to ``demonstrate the possibility of totally
verified systems''. This will be achieved by designing a language (similar
to occam), a program verifier for that language, a verified compiler and a
verified processor. With these, a verified program can be run securely
on a verified processor with a verified compiler. The project had just
officially started and would last three years. Verification work would be
based around HOL.


\subparagraph{DECISIVE}
Also mentioned was the proposed ESPRIT DECISIVE\footnote{DECISIVE is an
acronym for DEsign of Correct Industrial Systems by Interactive
VErification.} project. Proposed partners for this are

\begin{itemize}
\item Siemens
\item SGS-Thomson Microelectronics
\item INMOS Ltd
\item Abstract Hardware Limited
\item Harlequin Ltd
\item Laboratory for the Foundations of Computer Science (Edinburgh University)
\item CEAB
\end{itemize}

Part of this project would aim to take the results from projects like
SAFEMOS and to produce industrially usable tools for verified design.
Areas of interest at INMOS included verified synthesis of hardware and

