(*:maxLineLen=78:*)

theory Logic
imports Base
begin

chapter ‹Primitive logic \label{ch:logic}›

text ‹
  The logical foundations of Isabelle/Isar are those of the Pure logic, which
  has been introduced as a Natural Deduction framework in cite"paulson700".
  This is essentially the same logic as ``λHOL›'' in the more abstract
  setting of Pure Type Systems (PTS) cite"Barendregt-Geuvers:2001",
  although there are some key differences in the specific treatment of simple
  types in Isabelle/Pure.

  Following type-theoretic parlance, the Pure logic consists of three levels
  of λ›-calculus with corresponding arrows, ⇒› for syntactic function space
  (terms depending on terms), ⋀› for universal quantification (proofs
  depending on terms), and ⟹› for implication (proofs depending on proofs).

  Derivations are relative to a logical theory, which declares type
  constructors, constants, and axioms. Theory declarations support schematic
  polymorphism, which is strictly speaking outside the logic.‹This is the
  deeper logical reason why the theory context Θ› is separate from the proof
  context Γ› of the core calculus: type constructors, term constants, and
  facts (proof constants) may involve arbitrary type schemes, but the type of
  a locally fixed term parameter is also fixed!›


section ‹Types \label{sec:types}›

text ‹
  The language of types is an uninterpreted order-sorted first-order algebra;
  types are qualified by ordered type classes.

  
  A ‹type class› is an abstract syntactic entity declared in the theory
  context. The ‹subclass relation› c1 ⊆ c2 is specified by stating an
  acyclic generating relation; the transitive closure is maintained
  internally. The resulting relation is an ordering: reflexive, transitive,
  and antisymmetric.

  A ‹sort› is a list of type classes written as s = {c1, …, cm}›; it
  represents symbolic intersection. Notationally, the curly braces are omitted
  for singleton intersections, i.e.\ any class c› may be read as a sort
  {c}›. The ordering on type classes is extended to sorts according to the
  meaning of intersections: {c1, …, cm} ⊆ {d1, …, dn}› iff ∀j. ∃i. ci ⊆
  dj. The empty intersection {}› refers to the universal sort, which is the
  largest element wrt.\ the sort order. Thus {}› represents the ``full
  sort'', not the empty one! The intersection of all (finitely many) classes
  declared in the current theory is the least element wrt.\ the sort ordering.

  
  A ‹fixed type variable› is a pair of a basic name (starting with a '›
  character) and a sort constraint, e.g.\ ('a, s)› which is usually printed
  as αs. A ‹schematic type variable› is a pair of an indexname and a sort
  constraint, e.g.\ (('a, 0), s)› which is usually printed as ?αs.

  Note that ‹all› syntactic components contribute to the identity of type
  variables: basic name, index, and sort constraint. The core logic handles
  type variables with the same name but different sorts as different, although
  the type-inference layer (which is outside the core) rejects anything like
  that.

  A ‹type constructor› κ› is a k›-ary operator on types declared in the
  theory. Type constructor application is written postfix as (α1, …, αk)κ›.
  For k = 0› the argument tuple is omitted, e.g.\ prop› instead of ()prop›.
  For k = 1› the parentheses are omitted, e.g.\ α list› instead of
  (α)list›. Further notation is provided for specific constructors, notably
  the right-associative infix α ⇒ β› instead of (α, β)fun›.

  The logical category ‹type› is defined inductively over type variables and
  type constructors as follows: τ = αs | ?αs | (τ1, …, τk)κ›.

  A ‹type abbreviation› is a syntactic definition (vecα)κ = τ› of an
  arbitrary type expression τ› over variables vecα›. Type abbreviations
  appear as type constructors in the syntax, but are expanded before entering
  the logical core.

  A ‹type arity› declares the image behavior of a type constructor wrt.\ the
  algebra of sorts: κ :: (s1, …, sk)s› means that (τ1, …, τk)κ› is of
  sort s› if every argument type τi is of sort si. Arity declarations
  are implicitly completed, i.e.\ κ :: (vecs)c› entails κ ::
  (vecs)c'› for any c' ⊇ c›.

  
  The sort algebra is always maintained as ‹coregular›, which means that type
  arities are consistent with the subclass relation: for any type constructor
  κ›, and classes c1 ⊆ c2, and arities κ :: (vecs1)c1 and κ ::
  (vecs2)c2, the inclusion vecs1 ⊆ vecs2 holds component-wise.

  The key property of a coregular order-sorted algebra is that sort
  constraints can be solved in a most general fashion: for each type
  constructor κ› and sort s› there is a most general vector of argument
  sorts (s1, …, sk)› such that a type scheme (αs1, …, αsk)κ› is of
  sort s›. Consequently, type unification has most general solutions (modulo
  equivalence of sorts), so type-inference produces primary types as expected
  cite"nipkow-prehofer".
›

text %mlref ‹
  \begin{mldecls}
  @{define_ML_type class = string} \\
  @{define_ML_type sort = "class list"} \\
  @{define_ML_type arity = "string * sort list * sort"} \\
  @{define_ML_type typ} \\
  @{define_ML Term.map_atyps: "(typ -> typ) -> typ -> typ"} \\
  @{define_ML Term.fold_atyps: "(typ -> 'a -> 'a) -> typ -> 'a -> 'a"} \\
  \end{mldecls}
  \begin{mldecls}
  @{define_ML Sign.subsort: "theory -> sort * sort -> bool"} \\
  @{define_ML Sign.of_sort: "theory -> typ * sort -> bool"} \\
  @{define_ML Sign.add_type: "Proof.context -> binding * int * mixfix -> theory -> theory"} \\
  @{define_ML Sign.add_type_abbrev: "Proof.context ->
  binding * string list * typ -> theory -> theory"} \\
  @{define_ML Sign.primitive_class: "binding * class list -> theory -> theory"} \\
  @{define_ML Sign.primitive_classrel: "class * class -> theory -> theory"} \\
  @{define_ML Sign.primitive_arity: "arity -> theory -> theory"} \\
  \end{mldecls}

   Type ML_typeclass represents type classes.

   Type ML_typesort represents sorts, i.e.\ finite intersections of
  classes. The empty list ML[]: sort refers to the empty class
  intersection, i.e.\ the ``full sort''.

   Type ML_typearity represents type arities. A triple (κ, vecs, s)
  : arity› represents κ :: (vecs)s› as described above.

   Type ML_typetyp represents types; this is a datatype with constructors
  MLTFree, MLTVar, MLType.

   MLTerm.map_atyps~f τ› applies the mapping f› to all atomic types
  (MLTFree, MLTVar) occurring in τ›.

   MLTerm.fold_atyps~f τ› iterates the operation f› over all
  occurrences of atomic types (MLTFree, MLTVar) in τ›; the type
  structure is traversed from left to right.

   MLSign.subsort~thy (s1, s2)› tests the subsort relation s1 ⊆
  s2.

   MLSign.of_sort~thy (τ, s)› tests whether type τ› is of sort s›.

   MLSign.add_type~ctxt (κ, k, mx)› declares a new type constructor κ›
  with k› arguments and optional mixfix syntax.

   MLSign.add_type_abbrev~ctxt (κ, vecα, τ)› defines a new type
  abbreviation (vecα)κ = τ›.

   MLSign.primitive_class~(c, [c1, …, cn])› declares a new class c›,
  together with class relations c ⊆ ci, for i = 1, …, n›.

   MLSign.primitive_classrel~(c1, c2)› declares the class relation
  c1 ⊆ c2.

   MLSign.primitive_arity~(κ, vecs, s)› declares the arity κ ::
  (vecs)s›.
›
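
text ‹
  As a minimal illustration of these interfaces, here is a small sanity check
  (a sketch, assuming the HOL-based context of this manual):
›

ML_val ‹
  val thy = \<^theory>;

  (*every sort is included in the full sort, i.e. the empty intersection*)
  \<^assert> (Sign.subsort thy (\<^sort>‹type›, []));

  (*type nat is of sort type*)
  \<^assert> (Sign.of_sort thy (\<^typ>‹nat›, \<^sort>‹type›));

  (*fold over atomic types, collecting occurrences from left to right*)
  val atyps = Term.fold_atyps (fn T => fn Ts => T :: Ts) \<^typ>‹'a ⇒ nat› [];
  \<^assert> (atyps = [\<^typ>‹nat›, TFree ("'a", \<^sort>‹type›)]);
›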

text %mlantiq ‹
  \begin{matharray}{rcl}
  @{ML_antiquotation_def "class"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "sort"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "type_name"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "type_abbrev"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "nonterminal"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "typ"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "Type"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "Type_fn"} & : & ML_antiquotation› \\
  \end{matharray}

  rail‹
  @@{ML_antiquotation class} embedded
  ;
  @@{ML_antiquotation sort} sort
  ;
  (@@{ML_antiquotation type_name} |
   @@{ML_antiquotation type_abbrev} |
   @@{ML_antiquotation nonterminal}) embedded
  ;
  @@{ML_antiquotation typ} type
  ;
  (@@{ML_antiquotation Type} | @@{ML_antiquotation Type_fn}) embedded
  ›

   @{class c}› inlines the internalized class c› --- as ML_typestring
  literal.

   @{sort s}› inlines the internalized sort s› --- as ML_typestring
  list literal.

   @{type_name c}› inlines the internalized type constructor c› --- as
  ML_typestring literal.

   @{type_abbrev c}› inlines the internalized type abbreviation c› --- as
  ML_typestring literal.

   @{nonterminal c}› inlines the internalized syntactic type~/ grammar
  nonterminal c› --- as ML_typestring literal.

   @{typ τ}› inlines the internalized type τ› --- as constructor term for
  datatype ML_typetyp.

   @{Type source}› refers to embedded source text to produce a type
  constructor with name (formally checked) and arguments (inlined ML text).
  The embedded source› follows the syntax category @{syntax type_const}
  defined below.

   @{Type_fn source}› is similar to @{Type source}›, but produces a partial
  ML function that matches against a type constructor pattern, following
  syntax category @{syntax type_const_fn} below.


  rail‹
    @{syntax_def type_const}: @{syntax name} (@{syntax embedded_ml}*)
    ;
    @{syntax_def type_const_fn}: @{syntax type_const} @'=>' @{syntax embedded_ml}
    ;
    @{syntax_def embedded_ml}:
      @'_' | @{syntax embedded} | @{syntax control_symbol} @{syntax embedded}
  ›

  The text provided as @{syntax embedded_ml} is treated as regular Isabelle/ML
  source, possibly with nested antiquotations. ML text that consists only of a
  single antiquotation in compact control-cartouche notation can be written
  without an extra level of nested embedded text (see the last example
  below).
›

text %mlex ‹
  Here are some minimal examples for type constructor antiquotations.
›

ML ‹
  ― ‹type constructor without arguments:›
  val natT = \<^Type>‹nat›;

  ― ‹type constructor with arguments:›
  fun mk_funT (A, B) = \<^Type>‹fun A B›;

  ― ‹type constructor decomposition as partial function:›
  val dest_funT = \<^Type_fn>‹fun A B => ‹(A, B)››;

  ― ‹nested type constructors:›
  fun mk_relT A = \<^Type>‹fun A \<^Type>‹fun A \<^Type>‹bool›››;

  \<^assert> (mk_relT natT = \<^typ>‹nat ⇒ nat ⇒ bool›);
›
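
text ‹
  For illustration, the decomposition function above may be applied directly;
  being a partial function, it raises the ML exception Match for types that
  are not function types:
›

ML_val ‹
  \<^assert> (dest_funT \<^typ>‹nat ⇒ bool› = (\<^typ>‹nat›, \<^typ>‹bool›));
›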


section ‹Terms \label{sec:terms}›

text ‹
  The language of terms is that of simply-typed λ›-calculus with de-Bruijn
  indices for bound variables (cf.\ cite"debruijn72" or
  cite"paulson-ml2"), with the types being determined by the corresponding
  binders. In contrast, free variables and constants have an explicit name and
  type in each occurrence.

  
  A ‹bound variable› is a natural number b›, which accounts for the number
  of intermediate binders between the variable occurrence in the body and its
  binding position. For example, the de-Bruijn term λbool. λbool. 1 ∧ 0›
  would correspond to λxbool. λybool. x ∧ y› in a named representation.
  Note that a bound variable may be represented by different de-Bruijn indices
  at different occurrences, depending on the nesting of abstractions.

  A ‹loose variable› is a bound variable that is outside the scope of local
  binders. The types (and names) for loose variables can be managed as a
  separate context that is maintained as a stack of hypothetical binders. The
  core logic operates on closed terms, without any loose variables.

  A ‹fixed variable› is a pair of a basic name and a type, e.g.\ (x, τ)›
  which is usually printed xτ here. A ‹schematic variable› is a pair of an
  indexname and a type, e.g.\ ((x, 0), τ)› which is likewise printed as
  ?xτ.

  
  A ‹constant› is a pair of a basic name and a type, e.g.\ (c, τ)› which is
  usually printed as cτ here. Constants are declared in the context as
  polymorphic families c :: σ›, meaning that all substitution instances cτ
  for τ = σ\<vartheta>› are valid.

  The vector of ‹type arguments› of constant cτ wrt.\ the declaration c
  :: σ› is defined as the codomain of the matcher \<vartheta> = {?α1 ↦ τ1,
  …, ?αn ↦ τn}› presented in canonical order 1, …, τn)›, corresponding
  to the left-to-right occurrences of the αi in σ›. Within a given theory
  context, there is a one-to-one correspondence between any constant cτ and
  the application c(τ1, …, τn)› of its type arguments. For example, with
  plus :: α ⇒ α ⇒ α›, the instance plusnat ⇒ nat ⇒ nat corresponds to
  plus(nat)›.

  Constant declarations c :: σ› may contain sort constraints for type
  variables in σ›. These are observed by type-inference as expected, but
  ‹ignored› by the core logic. This means the primitive logic is able to
  reason with instances of polymorphic constants that the user-level
  type-checker would reject due to violation of type class restrictions.

  
  An ‹atomic term› is either a variable or constant. The logical category
  ‹term› is defined inductively over atomic terms, with abstraction and
  application as follows: t = b | xτ | ?xτ | cτ | λτ. t | t1 t2.
  Parsing and printing takes care of converting between the internal de-Bruijn
  representation and an external representation with named bound variables.
  Subsequently, we shall use the latter notation instead of the internal one.

  The inductive relation t :: τ› assigns a (unique) type to a term according
  to the structure of atomic terms, abstractions, and applications:
  \[
  \infer{aτ :: τ›}{}
  \qquad
  \infer{(λxτ. t) :: τ ⇒ σ›}{t :: σ›}
  \qquad
  \infer{t u :: σ›}{t :: τ ⇒ σ› & u :: τ›}
  \]
  A ‹well-typed term› is a term that can be typed according to these rules.

  Typing information can be omitted: type-inference is able to reconstruct the
  most general type of a raw term, while assigning most general types to all
  of its variables and constants. Type-inference depends on a context of type
  constraints for fixed variables, and declarations for polymorphic constants.

  The identity of atomic terms consists both of the name and the type
  component. This means that different variables xτ1 and xτ2 may
  become the same after type instantiation. Type-inference rejects variables
  of the same name, but different types. In contrast, mixed instances of
  polymorphic constants occur routinely.

  
  The ‹hidden polymorphism› of a term t :: σ› is the set of type variables
  occurring in t›, but not in its type σ›. This means that the term
  implicitly depends on type arguments that are not accounted for in the result
  type, i.e.\ there are different type instances t\<vartheta> :: σ› and
  t\<vartheta>' :: σ› with the same type. This slightly pathological
  situation notoriously demands additional care.

  
  A ‹term abbreviation› is a syntactic definition cσ ≡ t› of a closed term
  t› of type σ›, without any hidden polymorphism. A term abbreviation looks
  like a constant in the syntax, but is expanded before entering the logical
  core. Abbreviations are usually reverted when printing terms, using t →
  cσ as rules for higher-order rewriting.

  
  Canonical operations on λ›-terms include αβη›-conversion: α›-conversion
  refers to capture-free renaming of bound variables; β›-conversion contracts
  an abstraction applied to an argument term, substituting the argument in the
  body: (λx. b)a› becomes b[a/x]›; η›-conversion contracts vacuous
  application-abstraction: λx. f x› becomes f›, provided that the bound
  variable does not occur in f›.

  Terms are normally treated modulo α›-conversion, which is implicit in the
  de-Bruijn representation. Names for bound variables in abstractions are
  maintained separately as (meaningless) comments, mostly for parsing and
  printing. Full αβη›-conversion is commonplace in various standard
  operations (\secref{sec:obj-rules}) that are based on higher-order
  unification and matching.
›
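
text ‹
  The following minimal sketch illustrates the de-Bruijn representation and
  α-equivalence, using term operations that are explained below:
›

ML_val ‹
  val x = Free ("x", \<^typ>‹nat›);

  (*λx::nat. x in de-Bruijn form: the name "x" survives only as a comment*)
  val t = lambda x x;
  \<^assert> (t aconv Abs ("x", \<^typ>‹nat›, Bound 0));

  (*α-equivalence ignores bound variable names, raw datatype equality does not*)
  \<^assert> (t aconv Abs ("y", \<^typ>‹nat›, Bound 0));
  \<^assert> (t <> Abs ("y", \<^typ>‹nat›, Bound 0));
›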

text %mlref ‹
  \begin{mldecls}
  @{define_ML_type term} \\
  @{define_ML_infix "aconv": "term * term -> bool"} \\
  @{define_ML Term.map_types: "(typ -> typ) -> term -> term"} \\
  @{define_ML Term.fold_types: "(typ -> 'a -> 'a) -> term -> 'a -> 'a"} \\
  @{define_ML Term.map_aterms: "(term -> term) -> term -> term"} \\
  @{define_ML Term.fold_aterms: "(term -> 'a -> 'a) -> term -> 'a -> 'a"} \\
  \end{mldecls}
  \begin{mldecls}
  @{define_ML fastype_of: "term -> typ"} \\
  @{define_ML lambda: "term -> term -> term"} \\
  @{define_ML betapply: "term * term -> term"} \\
  @{define_ML incr_boundvars: "int -> term -> term"} \\
  @{define_ML Sign.declare_const: "Proof.context ->
  (binding * typ) * mixfix -> theory -> term * theory"} \\
  @{define_ML Sign.add_abbrev: "string -> binding * term ->
  theory -> (term * term) * theory"} \\
  @{define_ML Sign.const_typargs: "theory -> string * typ -> typ list"} \\
  @{define_ML Sign.const_instance: "theory -> string * typ list -> typ"} \\
  \end{mldecls}

   Type ML_typeterm represents de-Bruijn terms, with comments in
  abstractions, and explicitly named free variables and constants; this is a
  datatype with constructors @{define_ML Bound}, @{define_ML Free}, @{define_ML
  Var}, @{define_ML Const}, @{define_ML Abs}, @{define_ML_infix "$"}.

   t›~ML_text‹aconv›~u› checks α›-equivalence of two terms. This is the
  basic equality relation on type ML_typeterm; raw datatype equality
  should only be used for operations related to parsing or printing!

   MLTerm.map_types~f t› applies the mapping f› to all types occurring
  in t›.

   MLTerm.fold_types~f t› iterates the operation f› over all
  occurrences of types in t›; the term structure is traversed from left to
  right.

   MLTerm.map_aterms~f t› applies the mapping f› to all atomic terms
  (MLBound, MLFree, MLVar, MLConst) occurring in t›.

   MLTerm.fold_aterms~f t› iterates the operation f› over all
  occurrences of atomic terms (MLBound, MLFree, MLVar, MLConst) in t›; the term structure is traversed from left to right.

   MLfastype_of~t› determines the type of a well-typed term. This
  operation is relatively slow, despite the omission of any sanity checks.

   MLlambda~a b› produces an abstraction λa. b›, where occurrences of
  the atomic term a› in the body b› are replaced by bound variables.

   MLbetapply~(t, u)› produces an application t u›, with topmost
  β›-conversion if t› is an abstraction.

   MLincr_boundvars~j› increments a term's dangling bound variables by
  the offset j›. This is required when moving a subterm into a context where
  it is enclosed by a different number of abstractions. Bound variables with a
  matching abstraction are unaffected.

   MLSign.declare_const~ctxt ((c, σ), mx)› declares a new constant c ::
  σ› with optional mixfix syntax.

   MLSign.add_abbrev~print_mode (c, t)› introduces a new term
  abbreviation c ≡ t›.

   MLSign.const_typargs~thy (c, τ)› and MLSign.const_instance~thy
  (c, [τ1, …, τn])› convert between two representations of polymorphic
  constants: full type instance vs.\ compact type arguments form.
›
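
text ‹
  A small sketch of typing and β-conversion via the operations above:
›

ML_val ‹
  val x = Free ("x", \<^typ>‹nat›);
  val t = lambda x x;

  (*the unique type of a well-typed term*)
  \<^assert> (fastype_of t = \<^typ>‹nat ⇒ nat›);

  (*application with topmost β-contraction*)
  \<^assert> (betapply (t, \<^term>‹0::nat›) aconv \<^term>‹0::nat›);
›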

text %mlantiq ‹
  \begin{matharray}{rcl}
  @{ML_antiquotation_def "const_name"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "const_abbrev"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "const"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "term"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "prop"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "Const"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "Const_"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "Const_fn"} & : & ML_antiquotation› \\
  \end{matharray}

  rail‹
  (@@{ML_antiquotation const_name} |
   @@{ML_antiquotation const_abbrev}) embedded
  ;
  @@{ML_antiquotation const} ('(' (type + ',') ')')?
  ;
  @@{ML_antiquotation term} term
  ;
  @@{ML_antiquotation prop} prop
  ;
  (@@{ML_antiquotation Const} | @@{ML_antiquotation Const_} | @@{ML_antiquotation Const_fn})
    embedded
  ›

   @{const_name c}› inlines the internalized logical constant name c› ---
  as ML_typestring literal.

   @{const_abbrev c}› inlines the internalized abbreviated constant name c›
  --- as ML_typestring literal.

   @{const c(vecτ)}› inlines the internalized constant c› with precise
  type instantiation in the sense of MLSign.const_instance --- as MLConst constructor term for datatype ML_typeterm.

   @{term t}› inlines the internalized term t› --- as constructor term for
  datatype ML_typeterm.

   @{prop φ}› inlines the internalized proposition φ› --- as constructor
  term for datatype ML_typeterm.

   @{Const source}› refers to embedded source text to produce a term
  constructor with name (formally checked), type arguments and term arguments
  (inlined ML text). The embedded source› follows the syntax category
  @{syntax term_const} defined below, using @{syntax embedded_ml} from
  antiquotation @{ML_antiquotation Type} (\secref{sec:types}).

   @{Const_ source}› is similar to @{Const source}› for patterns instead of
  closed values. This distinction is required due to redundant type
  information within term constants: subtrees with duplicate ML pattern
  variable need to be suppressed (replaced by dummy patterns). The embedded source› follows
  the syntax category @{syntax term_const} defined below.

   @{Const_fn source}› is similar to @{Const_ source}›, but produces a
  proper ‹fn› expression with function body. The embedded source› follows
  the syntax category @{syntax term_const_fn} defined below.


  rail‹
    @{syntax_def term_const}:
      @{syntax name} (@{syntax embedded_ml}*) (@{syntax for_args})?
    ;
    @{syntax_def term_const_fn}:
      @{syntax term_const} @'=>' @{syntax embedded_ml}
    ;
    @{syntax_def for_args}: @'for' (@{syntax embedded_ml}+)
  ›

text %mlex ‹
  Here are some minimal examples for term constant antiquotations. Type
  arguments for constants are analogous to type constructors
  (\secref{sec:types}). Term arguments are different: a flexible number of
  arguments is inserted via curried applications (MLop $).
›

ML ‹
  ― ‹constant without type argument:›
  fun mk_conj (A, B) = \<^Const>‹conj for A B›;
  val dest_conj = \<^Const_fn>‹conj for A B => ‹(A, B)››;

  ― ‹constant with type argument:›
  fun mk_eq T (t, u) = \<^Const>‹HOL.eq T for t u›;
  val dest_eq = \<^Const_fn>‹HOL.eq T for t u => ‹(T, (t, u))››;

  ― ‹constant with variable number of term arguments:›
  val c = Const (\<^const_name>‹conj›, \<^typ>‹bool ⇒ bool ⇒ bool›);
  val P = \<^term>‹P::bool›;
  val Q = \<^term>‹Q::bool›;
  \<^assert> (\<^Const>‹conj› = c);
  \<^assert> (\<^Const>‹conj for P› = c $ P);
  \<^assert> (\<^Const>‹conj for P Q› = c $ P $ Q);
›


section ‹Theorems \label{sec:thms}›

text ‹
  A ‹proposition› is a well-typed term of type prop›, a ‹theorem› is a
  proven proposition (depending on a context of hypotheses and the background
  theory). Primitive inferences include plain Natural Deduction rules for the
  primary connectives ⋀› and ⟹› of the framework. There is also a builtin
  notion of equality/equivalence ≡›.
›


subsection ‹Primitive connectives and rules \label{sec:prim-rules}›

text ‹
  The theory Pure› contains constant declarations for the primitive
  connectives ⋀›, ⟹›, and ≡› of the logical framework, see
  \figref{fig:pure-connectives}. The derivability judgment A1, …, An ⊢ B›
  is defined inductively by the primitive inferences given in
  \figref{fig:prim-rules}, with the global restriction that the hypotheses
  must ‹not› contain any schematic variables. The builtin equality is
  conceptually axiomatized as shown in \figref{fig:pure-equality}, although
  the implementation works directly with derived inferences.

  \begin{figure}[htb]
  \begin{center}
  \begin{tabular}{ll}
  all :: (α ⇒ prop) ⇒ prop› & universal quantification (binder ⋀›) \\
  ⟹ :: prop ⇒ prop ⇒ prop› & implication (right associative infix) \\
  ≡ :: α ⇒ α ⇒ prop› & equality relation (infix) \\
  \end{tabular}
  \caption{Primitive connectives of Pure}\label{fig:pure-connectives}
  \end{center}
  \end{figure}

  \begin{figure}[htb]
  \begin{center}
  \[
  \infer[(axiom)›]{⊢ A›}{A ∈ Θ›}
  \qquad
  \infer[(assume)›]{A ⊢ A›}{}
  \]
  \[
  \infer[(⋀‐intro)›]{Γ ⊢ ⋀x. B[x]›}{Γ ⊢ B[x]› & x ∉ Γ›}
  \qquad
  \infer[(⋀‐elim)›]{Γ ⊢ B[a]›}{Γ ⊢ ⋀x. B[x]›}
  \]
  \[
  \infer[(⟹‐intro)›]{Γ - A ⊢ A ⟹ B›}{Γ ⊢ B›}
  \qquad
  \infer[(⟹‐elim)›]{Γ1 ∪ Γ2 ⊢ B›}{Γ1 ⊢ A ⟹ B› & Γ2 ⊢ A›}
  \]
  \caption{Primitive inferences of Pure}\label{fig:prim-rules}
  \end{center}
  \end{figure}

  \begin{figure}[htb]
  \begin{center}
  \begin{tabular}{ll}
  ⊢ (λx. b[x]) a ≡ b[a]› & β›-conversion \\
  ⊢ x ≡ x› & reflexivity \\
  ⊢ x ≡ y ⟹ P x ⟹ P y› & substitution \\
  ⊢ (⋀x. f x ≡ g x) ⟹ f ≡ g› & extensionality \\
  ⊢ (A ⟹ B) ⟹ (B ⟹ A) ⟹ A ≡ B› & logical equivalence \\
  \end{tabular}
  \caption{Conceptual axiomatization of Pure equality}\label{fig:pure-equality}
  \end{center}
  \end{figure}

  The introduction and elimination rules for ⋀› and ⟹› are analogous to
  formation of dependently typed λ›-terms representing the underlying proof
  objects. Proof terms are irrelevant in the Pure logic, though; they cannot
  occur within propositions. The system provides a runtime option to record
  explicit proof terms for primitive inferences, see also
  \secref{sec:proof-terms}. Thus all three levels of λ›-calculus become
  explicit: ⇒› for terms, and ⋀/⟹› for proofs (cf.\ cite"Berghofer-Nipkow:2000:TPHOL").

  Observe that locally fixed parameters (as in ⋀‐intro›) need not be recorded
  in the hypotheses, because the simple syntactic types of Pure are always
  inhabitable. ``Assumptions'' x :: τ› for type-membership are only present
  as long as some xτ occurs in the statement body.‹This is the key
  difference to ``λHOL›'' in the PTS framework cite"Barendregt-Geuvers:2001", where hypotheses x : A› are treated uniformly
  for propositions and types.›

  
  The axiomatization of a theory is implicitly closed by forming all instances
  of type and term variables: ⊢ A\<vartheta>› holds for any substitution
  instance of an axiom ⊢ A›. By pushing substitutions through derivations
  inductively, we also get admissible generalize› and instantiate› rules as
  shown in \figref{fig:subst-rules}.

  \begin{figure}[htb]
  \begin{center}
  \[
  \infer{Γ ⊢ B[?α]›}{Γ ⊢ B[α]› & α ∉ Γ›}
  \quad
  \infer[\quad(generalize)›]{Γ ⊢ B[?x]›}{Γ ⊢ B[x]› & x ∉ Γ›}
  \]
  \[
  \infer{Γ ⊢ B[τ]›}{Γ ⊢ B[?α]›}
  \quad
  \infer[\quad(instantiate)›]{Γ ⊢ B[t]›}{Γ ⊢ B[?x]›}
  \]
  \caption{Admissible substitution rules}\label{fig:subst-rules}
  \end{center}
  \end{figure}

  Note that instantiate› does not require an explicit side-condition, because
  Γ› may never contain schematic variables.

  In principle, variables could be substituted in hypotheses as well, but this
  would disrupt the monotonicity of reasoning: deriving Γ\<vartheta> ⊢
  B\<vartheta>› from Γ ⊢ B› is correct, but Γ\<vartheta> ⊇ Γ› does not
  necessarily hold: the result belongs to a different proof context.

   An ‹oracle› is a function that produces axioms on the fly. Logically,
  this is an instance of the axiom› rule (\figref{fig:prim-rules}), but there
  is an operational difference. The inference kernel records oracle
  invocations within derivations of theorems by a unique tag. This also
  includes implicit type-class reasoning via the order-sorted algebra of class
  relations and type arities (see also @{command_ref instantiation} and
  @{command_ref instance}).

  Axiomatizations should be limited to the bare minimum, typically as part of
  the initial logical basis of an object-logic formalization. Later on,
  theories are usually developed in a strictly definitional fashion, by
  stating only certain equalities over new constants.

  A ‹simple definition› consists of a constant declaration c :: σ› together
  with an axiom ⊢ c ≡ t›, where t :: σ› is a closed term without any hidden
  polymorphism. The RHS may depend on further defined constants, but not c›
  itself. Definitions of functions may be presented as c vecx ≡ t›
  instead of the puristic c ≡ λvecx. t›.

  An ‹overloaded definition› consists of a collection of axioms for the same
  constant, with zero or one equations c((vecα)κ) ≡ t› for each type
  constructor κ› (for distinct variables vecα›). The RHS may mention
  previously defined constants as above, or arbitrary constants d(αi)› for
  some αi projected from vecα›. Thus overloaded definitions
  essentially work by primitive recursion over the syntactic structure of a
  single type argument. See also cite‹\S4.3 in
  "Haftmann-Wenzel:2006:classes"›.
›

text %mlref ‹
  \begin{mldecls}
  @{define_ML Logic.all: "term -> term -> term"} \\
  @{define_ML Logic.mk_implies: "term * term -> term"} \\
  \end{mldecls}
  \begin{mldecls}
  @{define_ML_type ctyp} \\
  @{define_ML_type cterm} \\
  @{define_ML Thm.ctyp_of: "Proof.context -> typ -> ctyp"} \\
  @{define_ML Thm.cterm_of: "Proof.context -> term -> cterm"} \\
  @{define_ML Thm.apply: "cterm -> cterm -> cterm"} \\
  @{define_ML Thm.lambda: "cterm -> cterm -> cterm"} \\
  @{define_ML Thm.all: "Proof.context -> cterm -> cterm -> cterm"} \\
  @{define_ML Drule.mk_implies: "cterm * cterm -> cterm"} \\
  \end{mldecls}
  \begin{mldecls}
  @{define_ML_type thm} \\
  @{define_ML Thm.transfer: "theory -> thm -> thm"} \\
  @{define_ML Thm.assume: "cterm -> thm"} \\
  @{define_ML Thm.forall_intr: "cterm -> thm -> thm"} \\
  @{define_ML Thm.forall_elim: "cterm -> thm -> thm"} \\
  @{define_ML Thm.implies_intr: "cterm -> thm -> thm"} \\
  @{define_ML Thm.implies_elim: "thm -> thm -> thm"} \\
  @{define_ML Thm.generalize: "Names.set * Names.set -> int -> thm -> thm"} \\
  @{define_ML Thm.instantiate: "ctyp TVars.table * cterm Vars.table -> thm -> thm"} \\
  @{define_ML Thm.add_axiom: "Proof.context ->
  binding * term -> theory -> (string * thm) * theory"} \\
  @{define_ML Thm.add_oracle: "binding * ('a -> cterm) -> theory ->
  (string * ('a -> thm)) * theory"} \\
  @{define_ML Thm.add_def: "Defs.context -> bool -> bool ->
  binding * term -> theory -> (string * thm) * theory"} \\
  \end{mldecls}
  \begin{mldecls}
  @{define_ML Theory.add_deps: "Defs.context -> string ->
  Defs.entry -> Defs.entry list -> theory -> theory"} \\
  @{define_ML Thm_Deps.all_oracles: "thm list -> Proofterm.oracle list"} \\
  \end{mldecls}

   MLLogic.all~a B› produces a Pure quantification ⋀a. B›, where
  occurrences of the atomic term a› in the body proposition B› are replaced
  by bound variables. (See also MLlambda on terms.)

   MLLogic.mk_implies~(A, B)› produces a Pure implication A ⟹ B›.

   Types ML_typectyp and ML_typecterm represent certified types and
  terms, respectively. These are abstract datatypes that guarantee that their
  values have passed the full well-formedness (and well-typedness) checks,
  relative to the declarations of type constructors, constants etc.\ in the
  background theory. The abstract types ML_typectyp and ML_typecterm
  are part of the same inference kernel that is mainly responsible for
  ML_typethm. Thus syntactic operations on ML_typectyp and ML_typecterm are located in the ML_structureThm module, even though theorems
  are not yet involved at that stage.

   MLThm.ctyp_of~ctxt τ› and MLThm.cterm_of~ctxt t› explicitly
  check types and terms, respectively. This also involves some basic
  normalizations, such as expansion of type and term abbreviations from the
  underlying theory context. Full re-certification is relatively slow and
  should be avoided in tight reasoning loops.

   MLThm.apply, MLThm.lambda, MLThm.all, MLDrule.mk_implies
  etc.\ compose certified terms (or propositions) incrementally. This is
  equivalent to MLThm.cterm_of after unchecked ML_infix$, MLlambda,
  MLLogic.all, MLLogic.mk_implies etc., but there can be a big
  difference in performance when large existing entities are composed by a few
  extra constructions on top. There are separate operations to decompose
  certified terms and theorems to produce certified terms again.

   Type ML_typethm represents proven propositions. This is an abstract
  datatype that guarantees that its values have been constructed by basic
  principles of the ML_structureThm module. Every ML_typethm value
  refers to its background theory, cf.\ \secref{sec:context-theory}.

   MLThm.transfer~thy thm› transfers the given theorem to a ‹larger›
  theory, see also \secref{sec:context}. This formal adjustment of the
  background context has no logical significance, but is occasionally required
  for formal reasons, e.g.\ when theorems that are imported from more basic
  theories are used in the current situation.

   MLThm.assume, MLThm.forall_intr, MLThm.forall_elim, MLThm.implies_intr, and MLThm.implies_elim correspond to the primitive
  inferences of \figref{fig:prim-rules}.

   MLThm.generalize~(vecα, vecx)› corresponds to the
  generalize› rules of \figref{fig:subst-rules}. Here collections of type and
  term variables are generalized simultaneously, specified by the given sets of
  basic names.

   MLThm.instantiate~(vecαs, vecxτ)› corresponds to the
  instantiate› rules of \figref{fig:subst-rules}. Type variables are
  substituted before term variables. Note that the types in vecxτ refer
  to the instantiated versions.

   MLThm.add_axiom~ctxt (name, A)› declares an arbitrary proposition as
  axiom, and retrieves it as a theorem from the resulting theory, cf.\ axiom›
  in \figref{fig:prim-rules}. Note that the low-level representation in the
  axiom table may differ slightly from the returned theorem.

   MLThm.add_oracle~(binding, oracle)› produces a named oracle rule,
  essentially generating arbitrary axioms on the fly, cf.\ axiom› in
  \figref{fig:prim-rules}.

   MLThm.add_def~ctxt unchecked overloaded (name, c vecx ≡ t)›
  states a definitional axiom for an existing constant c›. Dependencies are
  recorded via MLTheory.add_deps, unless the unchecked› option is set.
  Note that the low-level representation in the axiom table may differ
  slightly from the returned theorem.

   MLTheory.add_deps~ctxt name cτ vecdσ declares dependencies of
  a named specification for constant cτ, relative to existing
  specifications for constants vecdσ. This also works for type
  constructors.

   MLThm_Deps.all_oracles~thms› returns all oracles used in the
  internal derivation of the given theorems; this covers the full graph of
  transitive dependencies. The result contains an authentic oracle name; if
  @{ML Proofterm.proofs} was at least @{ML 1} during the oracle inference, it
  also contains the position of the oracle invocation and its proposition. See
  also the command @{command_ref "thm_oracles"}.
›
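
text ‹
  As a minimal sketch of the primitive inferences, here is a derivation of
  ‹PROP A ⟹ PROP A› by assume› followed by ⟹‐intro›:
›

ML_val ‹
  val A = \<^cprop>‹PROP A›;

  (*A ⊢ A*)
  val th = Thm.assume A;

  (*⊢ PROP A ⟹ PROP A: the hypothesis has been discharged*)
  val th' = Thm.implies_intr A th;

  \<^assert> (Thm.prop_of th' aconv \<^prop>‹PROP A ⟹ PROP A›);
  \<^assert> (null (Thm.hyps_of th'));
›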

text %mlantiq ‹
  \begin{matharray}{rcl}
  @{ML_antiquotation_def "ctyp"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "cterm"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "cprop"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "thm"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "thms"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "lemma"} & : & ML_antiquotation› \\
  @{ML_antiquotation_def "oracle_name"} & : & ML_antiquotation› \\
  \end{matharray}

  rail‹
  @@{ML_antiquotation ctyp} typ
  ;
  @@{ML_antiquotation cterm} term
  ;
  @@{ML_antiquotation cprop} prop
  ;
  @@{ML_antiquotation thm} thm
  ;
  @@{ML_antiquotation thms} thms
  ;
  @@{ML_antiquotation lemma} ('(' @'open' ')')? ((prop +) + @'and') 
    @{syntax for_fixes} @'by' method method?
  ;
  @@{ML_antiquotation oracle_name} embedded
  ›

   @{ctyp τ}› produces a certified type wrt.\ the current background theory
  --- as abstract value of type ML_typectyp.

   @{cterm t}› and @{cprop φ}› produce a certified term wrt.\ the current
  background theory --- as abstract value of type ML_typecterm.

   @{thm a}› produces a singleton fact --- as abstract value of type
  ML_typethm.

   @{thms a}› produces a general fact --- as abstract value of type
  ML_typethm list.

   @{lemma φ by meth}› produces a fact that is proven on the spot according
  to the minimal proof, which imitates a terminal Isar proof. The result is an
  abstract value of type ML_typethm or ML_typethm list, depending on
  the number of propositions given here.

  The internal derivation object lacks a proper theorem name, but it is
  formally closed, unless the (open)› option is specified (this may impact
  performance of applications with proof terms).

  Since ML antiquotations are always evaluated at compile-time, there is no
  run-time overhead even for non-trivial proofs. Nonetheless, the
  justification is syntactically limited to a single @{command "by"} step.
  More complex Isar proofs should be done in regular theory source, before
  compiling the corresponding ML text that uses the result.

   @{oracle_name a}› inlines the internalized oracle name a› --- as
  ML_typestring literal.
›
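
text ‹
  For example, a fact proven once at compile-time via the lemma›
  antiquotation (a sketch, assuming the HOL-based context of this manual):
›

ML_val ‹
  val thm = @{lemma ‹(x::nat) + 0 = x› by simp};
  \<^assert> (Thm.nprems_of thm = 0);
›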


subsection ‹Auxiliary connectives \label{sec:logic-aux}›

text ‹
  Theory Pure› provides a few auxiliary connectives that are defined on top
  of the primitive ones, see \figref{fig:pure-aux}. These special constants
  are useful in certain internal encodings, and are normally not directly
  exposed to the user.

  \begin{figure}[htb]
  \begin{center}
  \begin{tabular}{ll}
  conjunction :: prop ⇒ prop ⇒ prop› & (infix &&&›) \\
  ⊢ A &&& B ≡ (⋀C. (A ⟹ B ⟹ C) ⟹ C)› \\[1ex]
  prop :: prop ⇒ prop› & (prefix #›, suppressed) \\
  #A ≡ A› \\[1ex]
  term :: α ⇒ prop› & (prefix TERM›) \\
  term x ≡ (⋀A. A ⟹ A)› \\[1ex]
  type :: α itself› & (prefix TYPE›) \\
  (unspecified)› \\
  \end{tabular}
  \caption{Definitions of auxiliary connectives}\label{fig:pure-aux}
  \end{center}
  \end{figure}

  The introduction A ⟹ B ⟹ A &&& B›, and eliminations (projections) A &&& B
  ⟹ A› and A &&& B ⟹ B› are available as derived rules. Conjunction allows
  simultaneous assumptions and conclusions to be treated uniformly, e.g.\
  consider A ⟹ B ⟹ C &&& D›. In particular, the goal mechanism represents multiple claims
  as explicit conjunction internally, but this is refined (via backwards
  introduction) into separate sub-goals before the user commences the proof;
  the final result is projected into a list of theorems using eliminations
  (cf.\ \secref{sec:tactical-goals}).

  The prop› marker (#›) makes arbitrarily complex propositions appear as
  atomic, without changing the meaning: Γ ⊢ A› and Γ ⊢ #A› are
  interchangeable. See \secref{sec:tactical-goals} for specific operations.

  The term› marker turns any well-typed term into a derivable proposition: ⊢
  TERM t› holds unconditionally. Although this is logically vacuous, it allows
  terms and proofs to be treated uniformly, similar to a type-theoretic framework.

  The TYPE› constructor is the canonical representative of the unspecified
  type α itself›; it essentially injects the language of types into that of
  terms. There is specific notation TYPE(τ)› for TYPEτ itself. Although
  being devoid of any particular meaning, the term TYPE(τ)› accounts for the
  type τ› within the term language. In particular, TYPE(α)› may be used as
  formal argument in primitive definitions, in order to circumvent hidden
  polymorphism (cf.\ \secref{sec:terms}). For example, c TYPE(α) ≡ A[α]›
  defines c :: α itself ⇒ prop› in terms of a proposition A› that depends on
  an additional type argument, which is essentially a predicate on types.
›

text %mlref ‹
  \begin{mldecls}
  @{define_ML Conjunction.intr: "thm -> thm -> thm"} \\
  @{define_ML Conjunction.elim: "thm -> thm * thm"} \\
  @{define_ML Drule.mk_term: "cterm -> thm"} \\
  @{define_ML Drule.dest_term: "thm -> cterm"} \\
  @{define_ML Logic.mk_type: "typ -> term"} \\
  @{define_ML Logic.dest_type: "term -> typ"} \\
  \end{mldecls}

   MLConjunction.intr derives A &&& B› from A› and B›.

   MLConjunction.elim derives A› and B› from A &&& B›.

   MLDrule.mk_term derives TERM t›.

   MLDrule.dest_term recovers term t› from TERM t›.

   MLLogic.mk_type~τ› produces the term TYPE(τ)›.

   MLLogic.dest_type~TYPE(τ)› recovers the type τ›.
›
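
text ‹
  A minimal sketch of these operations:
›

ML_val ‹
  (*TYPE(nat) as a term, and back*)
  val t = Logic.mk_type \<^typ>‹nat›;
  \<^assert> (fastype_of t = \<^typ>‹nat itself›);
  \<^assert> (Logic.dest_type t = \<^typ>‹nat›);

  (*A &&& B from A and B, and its projections*)
  val ab = Conjunction.intr (Thm.assume \<^cprop>‹PROP A›) (Thm.assume \<^cprop>‹PROP B›);
  val (a, b) = Conjunction.elim ab;
  \<^assert> (Thm.prop_of a aconv \<^prop>‹PROP A›);
›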


subsection ‹Sort hypotheses›

text ‹
  Type variables are decorated with sorts, as explained in \secref{sec:types}.
  This constrains type instantiation to certain ranges of types: variable
  αs may only be assigned to types τ› that belong to sort s›. Within the
  logic, sort constraints act like implicit preconditions on the result ⦇α1
  : s1⦈, …, ⦇αn : sn⦈, Γ ⊢ φ› where the type variables α1, …, αn cover
  the propositions Γ›, φ›, as well as the proof of Γ ⊢ φ›.

  These ‹sort hypotheses› of a theorem are passed monotonically through
  further derivations. They are redundant, as long as the statement of a
  theorem still contains the type variables that are accounted for here. The
  logical significance of sort hypotheses is limited to the boundary case
  where type variables disappear from the proposition, e.g.\ ⦇αs : s⦈ ⊢ φ›.
  Since such dangling type variables can be renamed arbitrarily without
  changing the proposition φ›, the inference kernel maintains sort hypotheses
  in anonymous form s ⊢ φ›.

  In most practical situations, such extra sort hypotheses may be stripped in
  a final bookkeeping step, e.g.\ at the end of a proof: they are typically
  left over from intermediate reasoning with type classes that can be
  satisfied by some concrete type τ› of sort s› to replace the hypothetical
  type variable αs.
›

text %mlref ‹
  \begin{mldecls}
  @{define_ML Thm.extra_shyps: "thm -> sort list"} \\
  @{define_ML Thm.strip_shyps: "thm -> thm"} \\
  \end{mldecls}

   MLThm.extra_shyps~thm› determines the extraneous sort hypotheses of
  the given theorem, i.e.\ the sorts that are not present within type
  variables of the statement.

   MLThm.strip_shyps~thm› removes any extraneous sort hypotheses that
  can be witnessed from the type signature.
›

text %mlex ‹
  The following artificial example demonstrates the derivation of propFalse with a pending sort hypothesis involving a logically empty sort.
›

class empty =
  assumes bad: "⋀(x::'a) y. x ≠ y"

theorem (in empty) false: False
  using bad by blast

ML_val ‹\<^assert> (Thm.extra_shyps @{thm false} = [\<^sort>‹empty›])›

text ‹
  Thanks to the inference kernel managing sort hypotheses according to their
  logical significance, this example is merely an instance of ‹ex falso
  quodlibet consequitur› --- not a collapse of the logical framework!
›


section ‹Object-level rules \label{sec:obj-rules}›

text ‹
  The primitive inferences covered so far mostly serve foundational purposes.
  User-level reasoning usually works via object-level rules that are
  represented as theorems of Pure. Composition of rules involves
  ‹backchaining›, ‹higher-order unification› modulo αβη›-conversion of
  λ›-terms, and so-called ‹lifting› of rules into a context of ⋀› and ⟹›
  connectives. Thus the full power of higher-order Natural Deduction in
  Isabelle/Pure becomes readily available.
›


subsection ‹Hereditary Harrop Formulae›

text ‹
  The idea of object-level rules is to model Natural Deduction inferences in
  the style of Gentzen cite"Gentzen:1935", but we allow arbitrary nesting
  similar to cite"Schroeder-Heister:1984". The most basic rule format is
  that of a ‹Horn Clause›:
  \[
  \infer{A›}{A1 & …› & An}
  \]
  where A, A1, …, An are atomic propositions of the framework, usually of
  the form Trueprop B›, where B› is a (compound) object-level statement.
  This object-level inference corresponds to an iterated implication in Pure
  like this:
  \[
  A1 ⟹ … ⟹ An ⟹ A›
  \]
  As an example consider conjunction introduction: A ⟹ B ⟹ A ∧ B›. Any
  parameters occurring in such rule statements are conceptually treated as
  arbitrary:
  \[
  ⋀x1 … xm. A1 x1 … xm ⟹ … ⟹ An x1 … xm ⟹ A x1 … xm
  \]

  Nesting of rules means that the positions of Ai may again hold compound
  rules, not just atomic propositions. Propositions of this format are called
  ‹Hereditary Harrop Formulae› in the literature cite"Miller:1991". Here
  we give an inductive characterization as follows:

  
  \begin{tabular}{ll}
  x & set of variables \\
  A & set of atomic propositions \\
  H  =  ⋀x*. H* ⟹ A & set of Hereditary Harrop Formulas \\
  \end{tabular}
  

  Thus we essentially impose nesting levels on propositions formed from ⋀›
  and ⟹›. At each level there is a prefix of parameters and compound
  premises, concluding an atomic proposition. Typical examples are
  ⟶›-introduction (A ⟹ B) ⟹ A ⟶ B› or mathematical induction P 0 ⟹ (⋀n. P n
  ⟹ P (Suc n)) ⟹ P n›. Even deeper nesting occurs in well-founded induction
  (⋀x. (⋀y. y ≺ x ⟹ P y) ⟹ P x) ⟹ P x›, but this already marks the limit of
  rule complexity that is usually seen in practice.

  
  Regular user-level inferences in Isabelle/Pure always maintain the following
  canonical form of results:

   Normalization by (A ⟹ (⋀x. B x)) ≡ (⋀x. A ⟹ B x)›, which is a theorem of
  Pure, means that quantifiers are pushed in front of implication at each
  level of nesting. The normal form is a Hereditary Harrop Formula.

   The outermost prefix of parameters is represented via schematic variables:
  instead of vecx. vecH vecx ⟹ A vecx› we have vecH
  ?vecx ⟹ A ?vecx›. Note that this representation loses information
  about the order of parameters, and vacuous quantifiers vanish automatically.
›

text %mlref ‹
  \begin{mldecls}
  @{define_ML Simplifier.norm_hhf: "Proof.context -> thm -> thm"} \\
  \end{mldecls}

   MLSimplifier.norm_hhf~ctxt thm› normalizes the given theorem
  according to the canonical form specified above. This is occasionally
  helpful to repair some low-level tools that do not handle Hereditary Harrop
  Formulae properly.
›
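
text ‹
  As a trivial sanity check (a sketch): a rule that is already in canonical
  form is left unchanged, up to α-equivalence.
›

ML_val ‹
  val ctxt = \<^context>;
  val rule = @{thm impI};   (*(?P ⟹ ?Q) ⟹ ?P ⟶ ?Q, already canonical*)
  \<^assert> (Thm.prop_of (Simplifier.norm_hhf ctxt rule) aconv Thm.prop_of rule);
›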


subsection ‹Rule composition›

text ‹
  The rule calculus of Isabelle/Pure provides two main inferences: @{inference
  resolution} (i.e.\ back-chaining of rules) and @{inference assumption}
  (i.e.\ closing a branch), both modulo higher-order unification. There are
  also combined variants, notably @{inference elim_resolution} and @{inference
  dest_resolution}.

  To understand the all-important @{inference resolution} principle, we first
  consider raw @{inference_def composition} (modulo higher-order unification
  with substitution \<vartheta>›):
  \[
  \infer[(@{inference_def composition})]{vecA\<vartheta> ⟹ C\<vartheta>›}
  {vecA ⟹ B› & B' ⟹ C› & B\<vartheta> = B'\<vartheta>›}
  \]
  Here the conclusion of the first rule is unified with the premise of the
  second; the resulting rule instance inherits the premises of the first and
  conclusion of the second. Note that C› can again consist of iterated
  implications. We can also permute the premises of the second rule
  back-and-forth in order to compose with B'› in any position (subsequently
  we shall always refer to position 1 w.l.o.g.).

  In @{inference composition} the internal structure of the common part B›
  and B'› is not taken into account. For proper @{inference resolution} we
  require B› to be atomic, and explicitly observe the structure vecx.
  vecH vecx ⟹ B' vecx› of the premise of the second rule. The idea
  is to adapt the first rule by ``lifting'' it into this context, by means of
  iterated application of the following inferences:
  \[
  \infer[(@{inference_def imp_lift})]{(vecH ⟹ vecA) ⟹ (vecH ⟹ B)›}{vecA ⟹ B›}
  \]
  \[
  \infer[(@{inference_def all_lift})]{(⋀vecx. vecA (?veca vecx)) ⟹ (⋀vecx. B (?veca vecx))›}{vecA ?veca ⟹ B ?veca›}
  \]
  By combining raw composition with lifting, we get full @{inference
  resolution} as follows:
  \[
  \infer[(@{inference_def resolution})]
  {(⋀vecx. vecH vecx ⟹ vecA (?veca vecx))\<vartheta> ⟹ C\<vartheta>›}
  {\begin{tabular}{l}
    vecA ?veca ⟹ B ?veca› \\
    (⋀vecx. vecH vecx ⟹ B' vecx) ⟹ C› \\
    (⋀vecx. B (?veca vecx))\<vartheta> = B'\<vartheta>› \\
   \end{tabular}}
  \]

  Continued resolution of rules allows a problem to be back-chained towards
  more and more sub-problems. Branches are closed either by resolving with a
  rule of 0 premises, or by producing a ``short-circuit'' within a solved
  situation (again modulo unification):
  \[
  \infer[(@{inference_def assumption})]{C\<vartheta>›}
  {(⋀vecx. vecH vecx ⟹ A vecx) ⟹ C› & A\<vartheta> = Hi\<vartheta>›~~\mbox{(for some~i›)}}
  \]

  %FIXME @{inference_def elim_resolution}, @{inference_def dest_resolution}
›

text %mlref ‹
  \begin{mldecls}
  @{define_ML_infix "RSN": "thm * (int * thm) -> thm"} \\
  @{define_ML_infix "RS": "thm * thm -> thm"} \\

  @{define_ML_infix "RLN": "thm list * (int * thm list) -> thm list"} \\
  @{define_ML_infix "RL": "thm list * thm list -> thm list"} \\

  @{define_ML_infix "MRS": "thm list * thm -> thm"} \\
  @{define_ML_infix "OF": "thm * thm list -> thm"} \\
  \end{mldecls}

   rule1 RSN (i, rule2)› resolves the conclusion of rule1 with the
  i›-th premise of rule2, according to the @{inference resolution}
  principle explained above. Unless there is precisely one resolvent it raises
  exception MLTHM.

  This corresponds to the rule attribute @{attribute THEN} in the Isar source
  language.

   rule1 RS rule2 abbreviates rule1 RSN (1, rule2)›.

   rules1 RLN (i, rules2)› joins lists of rules. For every rule1 in
  rules1 and rule2 in rules2, it resolves the conclusion of rule1
  with the i›-th premise of rule2, accumulating multiple results in one
  big list. Note that such strict enumerations of higher-order unifications
  can be inefficient compared to the lazy variant seen in elementary tactics
  like MLresolve_tac.

   rules1 RL rules2 abbreviates rules1 RLN (1, rules2)›.

   [rule1, …, rulen] MRS rule› resolves rulei against premise i› of
  rule›, for i = n, …, 1›. By working from right to left, newly emerging
  premises are concatenated in the result, without interfering.

   rule OF rules› is an alternative notation for rules MRS rule›, which
  makes rule composition look more like function application. Note that the
  argument rules› need not be atomic.

  This corresponds to the rule attribute @{attribute OF} in the Isar source
  language.
›
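
text ‹
  For example, composing canonical HOL rules (assuming the HOL-based context
  of this manual):
›

ML_val ‹
  (*conjI: ?P ⟹ ?Q ⟹ ?P ∧ ?Q,  disjI1: ?P ⟹ ?P ∨ ?Q*)
  val rule = @{thm conjI} RS @{thm disjI1};

  (*premises of conjI, conclusion of disjI1: ?P ⟹ ?Q ⟹ (?P ∧ ?Q) ∨ ?R*)
  \<^assert> (Thm.nprems_of rule = 2);
›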


section ‹Proof terms \label{sec:proof-terms}›

text ‹
  The Isabelle/Pure inference kernel can record the proof of each theorem as a
  proof term that contains all logical inferences in detail. Rule composition
  by resolution (\secref{sec:obj-rules}) and type-class reasoning is broken
  down to primitive rules of the logical framework. The proof term can be
  inspected by a separate proof-checker, for example.

  According to the well-known ‹Curry-Howard isomorphism›, a proof can be
  viewed as a λ›-term. Following this idea, proofs in Isabelle are internally
  represented by a datatype similar to the one for terms described in
  \secref{sec:terms}. On top of these syntactic terms, two more layers of
  λ›-calculus are added, which correspond to ⋀x :: α. B x› and A ⟹ B›
  according to the propositions-as-types principle. The resulting 3-level
  λ›-calculus resembles ``λHOL›'' in the more abstract setting of Pure Type
  Systems (PTS) cite"Barendregt-Geuvers:2001", if some fine points like
  schematic polymorphism and type classes are ignored.

  
  ‹Proof abstractions› of the form λx :: α. prf› or λp : A. prf›
  correspond to introduction of ⋀›/⟹›, and ‹proof applications› of the form
  p ⋅ t› or p ∙ q› correspond to elimination of ⋀›/⟹›. Actual types α›,
  propositions A›, and terms t› might be suppressed and reconstructed from
  the overall proof term.

  
  Various atomic proofs indicate special situations within the proof
  construction as follows.

  A ‹bound proof variable› is a natural number b› that acts as de-Bruijn
  index for proof term abstractions.

  A ‹minimal proof› ``?›'' is a dummy proof term. This indicates some
  unrecorded part of the proof.

  Hyp A› refers to some pending hypothesis by giving its proposition. This
  indicates an open context of implicit hypotheses, similar to loose bound
  variables or free variables within a term (\secref{sec:terms}).

  An ‹axiom› or ‹oracle› a : A[vecτ]› refers to some postulated ‹proof
  constant›, which is subject to schematic polymorphism of theory content, and
  the particular type instantiation may be given explicitly. The vector of
  types vecτ› refers to the schematic type variables in the generic
  proposition A› in canonical order.

  A ‹proof promise› a : A[vecτ]› is a placeholder for some proof of
  polymorphic proposition A›, with explicit type instantiation as given by
  the vector vecτ›, as above. Unlike axioms or oracles, proof promises
  may be ‹fulfilled› eventually, by substituting a› by some particular proof
  q› at the corresponding type instance. This acts like Hindley-Milner
  let›-polymorphism: a generic local proof definition may get used at
  different type instances, and is replaced by the concrete instance
  eventually.

  A ‹named theorem› wraps up some concrete proof as a closed formal entity,
  in the manner of constant definitions for proof terms. The ‹proof body› of
  such boxed theorems involves some digest about oracles and promises
  occurring in the original proof. This allows the inference kernel to manage
  this critical information without the full overhead of explicit proof terms.
›


subsection ‹Reconstructing and checking proof terms›

text ‹
  Fully explicit proof terms can be large, but most of this information is
  redundant and can be reconstructed from the context. Therefore, the
  Isabelle/Pure inference kernel records only ‹implicit› proof terms, by
  omitting all typing information in terms, all term and type labels of proof
  abstractions, and some argument terms of applications p ⋅ t› (if possible).

  There are separate operations to reconstruct the full proof term later on,
  using ‹higher-order pattern unification› cite"nipkow-patterns" and
  "Berghofer-Nipkow:2000:TPHOL".

  The ‹proof checker› expects a fully reconstructed proof term, and can turn
  it into a theorem by replaying its primitive inferences within the kernel.
›


subsection ‹Concrete syntax of proof terms›

text ‹
  The concrete syntax of proof terms is a slight extension of the regular
  inner syntax of Isabelle/Pure cite"isabelle-isar-ref". Its main
  syntactic category @{syntax (inner) proof} is defined as follows:

  \begin{center}
  \begin{supertabular}{rclr}

  @{syntax_def (inner) proof} & = & λ params› ‹.› proof› \\
    & |› & proof› ⋅› any› \\
    & |› & proof› ∙› proof› \\
    & |› & id  |  longid› \\
  \\

  param› & = & idt› \\
    & |› & idt› ‹:› prop› \\
    & |› & ‹(› param› ‹)› \\
  \\

  params› & = & param› \\
    & |› & param› params› \\

  \end{supertabular}
  \end{center}

  Implicit term arguments in partial proofs are indicated by ``_›''. Type
  arguments for theorems and axioms may be specified using p ⋅ TYPE(type)›
  (they must appear before any other term argument of a theorem or axiom, but
  may be omitted altogether).

  
  There are separate read and print operations for proof terms, in order to
  avoid conflicts with the regular term language.
›

text %mlref ‹
  \begin{mldecls}
  @{define_ML_type proof} \\
  @{define_ML_type proof_body} \\
  @{define_ML Proofterm.proofs: "int Unsynchronized.ref"} \\
  @{define_ML Proofterm.reconstruct_proof:
  "theory -> term -> proof -> proof"} \\
  @{define_ML Proofterm.expand_proof: "theory ->
  (Proofterm.thm_header -> string option) -> proof -> proof"} \\
  @{define_ML Proof_Checker.thm_of_proof: "theory -> proof -> thm"} \\
  @{define_ML Proof_Syntax.read_proof: "theory -> bool -> bool -> string -> proof"} \\
  @{define_ML Proof_Syntax.pretty_proof: "Proof.context -> proof -> Pretty.T"} \\
  \end{mldecls}

   Type ML_typeproof represents proof terms; this is a datatype with
  constructors @{define_ML Abst}, @{define_ML AbsP}, @{define_ML_infix "%"},
  @{define_ML_infix "%%"}, @{define_ML PBound}, @{define_ML MinProof}, @{define_ML
  Hyp}, @{define_ML PAxm}, @{define_ML Oracle}, @{define_ML PThm} as explained
  above. %FIXME PClass (!?)

   Type ML_typeproof_body represents the nested proof information of a
  named theorem, consisting of a digest of oracles and named theorems over some
  proof term. The digest only covers the directly visible part of the proof:
  in order to get the full information, the implicit graph of nested theorems
  needs to be traversed (e.g.\ using MLProofterm.fold_body_thms).

   MLThm.proof_of~thm› and MLThm.proof_body_of~thm› produce the
  proof term or proof body (with digest of oracles and theorems) from a given
  theorem. Note that this involves a full join of internal futures that
  fulfill pending proof promises, and thus disrupts the natural bottom-up
  construction of proofs by introducing dynamic ad-hoc dependencies. Parallel
  performance may suffer by inspecting proof terms at run-time.

   MLProofterm.proofs specifies the detail of proof recording within
  ML_typethm values produced by the inference kernel: ML0 records only
  the names of oracles, ML1 records oracle names and propositions, ML2
  additionally records full proof terms. Officially named theorems that
  contribute to a result are recorded in any case.

   MLProofterm.reconstruct_proof~thy prop prf› turns the implicit
  proof term prf› into a full proof of the given proposition.

  Reconstruction may fail if prf› is not a proof of prop›, or if it does not
  contain sufficient information for reconstruction. Failure may only happen
  for proofs that are constructed manually, but not for those produced
  automatically by the inference kernel.

   MLProofterm.expand_proof~thy expand prf› reconstructs and expands
  the proofs of nested theorems according to the given expand› function: a
  result of @{ML SOME ""} means full expansion, @{ML SOME}~name› means to
  keep the theorem node but replace its internal name, @{ML NONE} means no
  change.

   MLProof_Checker.thm_of_proof~thy prf› turns the given (full) proof
  into a theorem, by replaying it using only primitive rules of the inference
  kernel.

   MLProof_Syntax.read_proof~thy b1 b2 s› reads in a proof term. The
  Boolean flags indicate the use of sort and type information. Usually, typing
  information is left implicit and is inferred during proof reconstruction.
  %FIXME eliminate flags!?

   MLProof_Syntax.pretty_proof~ctxt prf› pretty-prints the given proof
  term.
›
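
text ‹
  A small sketch of inspecting proofs at run-time; the amount of recorded
  detail depends on MLProofterm.proofs as described above:
›

ML_val ‹
  (*current level of proof recording: 0, 1, or 2*)
  val level = ! Proofterm.proofs;

  (*the (possibly partial) proof term and proof body of a basic theorem*)
  val prf = Thm.proof_of @{thm refl};
  val body = Thm.proof_body_of @{thm refl};
›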

text %mlex ‹
   🗏‹~~/src/HOL/Proofs/ex/Proof_Terms.thy› provides basic examples involving
  proof terms.

   🗏‹~~/src/HOL/Proofs/ex/XML_Data.thy› demonstrates export and import of
  proof terms via XML/ML data representation.
›

end