Theory Generic

(*:maxLineLen=78:*)

theory Generic
imports Main Base
begin

chapter ‹Generic tools and packages \label{ch:gen-tools}›

section ‹Configuration options \label{sec:config}›

text ‹
  Isabelle/Pure maintains a record of named configuration options within the
  theory or proof context, with values of type ML_typebool, ML_typeint, ML_typereal, or ML_typestring. Tools may declare options in
  ML, and then refer to these values (relative to the context). Thus global
  reference variables are easily avoided. The user may change the value of a
  configuration option by means of an associated attribute of the same name.
  This form of context declaration works particularly well with commands such
  as @{command "declare"} or @{command "using"} like this:
›

(*<*)experiment begin(*>*)
declare [[show_main_goal = false]]

notepad
begin
  note [[show_main_goal = true]]
end
(*<*)end(*>*)

text ‹
  \begin{matharray}{rcll}
    @{command_def "print_options"} & : & context →› \\
  \end{matharray}

  rail@@{command print_options} ('!'?)
    ;
    @{syntax name} ('=' ('true' | 'false' | @{syntax int} | @{syntax float} | @{syntax name}))?

   @{command "print_options"} prints the available configuration options,
  with names, types, and current values; the ``!›'' option indicates extra
  verbosity.
  
   name = value› as an attribute expression modifies the named option, with
  the syntax of the value depending on the option's type. For ML_typebool
  the default value is true›. Any attempt to change a global option in a
  local context is ignored.
›


section ‹Basic proof tools›

subsection ‹Miscellaneous methods and attributes \label{sec:misc-meth-att}›

text ‹
  \begin{matharray}{rcl}
    @{method_def unfold} & : & method› \\
    @{method_def fold} & : & method› \\
    @{method_def insert} & : & method› \\[0.5ex]
    @{method_def erule}* & : & method› \\
    @{method_def drule}* & : & method› \\
    @{method_def frule}* & : & method› \\
    @{method_def intro} & : & method› \\
    @{method_def elim} & : & method› \\
    @{method_def fail} & : & method› \\
    @{method_def succeed} & : & method› \\
    @{method_def sleep} & : & method› \\
  \end{matharray}

  rail‹
    (@@{method fold} | @@{method unfold} | @@{method insert}) @{syntax thms}
    ;
    (@@{method erule} | @@{method drule} | @@{method frule})
      ('(' @{syntax nat} ')')? @{syntax thms}
    ;
    (@@{method intro} | @@{method elim}) @{syntax thms}?
    ;
    @@{method sleep} @{syntax real}

   @{method unfold}~a1 … an and @{method fold}~a1 … an expand (or
  fold back) the given definitions throughout all goals; any chained facts
  provided are inserted into the goal and subject to rewriting as well.

  Unfolding works in two stages: first, the given equations are used directly
  for rewriting; second, the equations are passed through the attribute
  @{attribute_ref abs_def} before rewriting --- to ensure that definitions are
  fully expanded, regardless of the actual parameters that are provided.

   @{method insert}~a1 … an inserts theorems as facts into all goals of
  the proof state. Note that current facts indicated for forward chaining are
  ignored.

   @{method erule}~a1 … an, @{method drule}~a1 … an, and @{method
  frule}~a1 … an are similar to the basic @{method rule} method (see
  \secref{sec:pure-meth-att}), but apply rules by elim-resolution,
  destruct-resolution, and forward-resolution, respectively cite"isabelle-implementation". The optional natural number argument (default 0)
  specifies additional assumption steps to be performed here.

  Note that these methods are improper ones, mainly serving for
  experimentation and tactic script emulation. Different modes of basic rule
  application are usually expressed in Isar at the proof language level,
  rather than via implicit proof state manipulations. For example, a proper
  single-step elimination would be done using the plain @{method rule} method,
  with forward chaining of current facts.

   @{method intro} and @{method elim} repeatedly refine some goal by intro-
  or elim-resolution, after having inserted any chained facts. Exactly the
  rules given as arguments are taken into account; this allows fine-tuned
  decomposition of a proof problem, in contrast to common automated tools.

   @{method fail} yields an empty result sequence; it is the identity of the
  ``|›'' method combinator (cf.\ \secref{sec:proof-meth}).

   @{method succeed} yields a single (unchanged) result; it is the identity
  of the ``,›'' method combinator (cf.\ \secref{sec:proof-meth}).

   @{method sleep}~s› succeeds after a real-time delay of s› seconds. This
  is occasionally useful for demonstration and testing purposes.


  \begin{matharray}{rcl}
    @{attribute_def tagged} & : & attribute› \\
    @{attribute_def untagged} & : & attribute› \\[0.5ex]
    @{attribute_def THEN} & : & attribute› \\
    @{attribute_def unfolded} & : & attribute› \\
    @{attribute_def folded} & : & attribute› \\
    @{attribute_def abs_def} & : & attribute› \\[0.5ex]
    @{attribute_def rotated} & : & attribute› \\
    @{attribute_def (Pure) elim_format} & : & attribute› \\
    @{attribute_def no_vars}* & : & attribute› \\
  \end{matharray}

  rail@@{attribute tagged} @{syntax name} @{syntax name}
    ;
    @@{attribute untagged} @{syntax name}
    ;
    @@{attribute THEN} ('[' @{syntax nat} ']')? @{syntax thm}
    ;
    (@@{attribute unfolded} | @@{attribute folded}) @{syntax thms}
    ;
    @@{attribute rotated} @{syntax int}?

   @{attribute tagged}~name value› and @{attribute untagged}~name› add and
  remove ‹tags› of some theorem. Tags may be any list of string pairs that
  serve as formal comment. The first string is considered the tag name, the
  second its value. Note that @{attribute untagged} removes any tags of the
  same name.

   @{attribute THEN}~a› composes rules by resolution; it resolves with the
  first premise of a› (an alternative position may be also specified). See
  also ML_infixRS in cite"isabelle-implementation".
  
   @{attribute unfolded}~a1 … an and @{attribute folded}~a1 … an
  expand and fold back again the given definitions throughout a rule.

   @{attribute abs_def} turns an equation of the form propf x y ≡ t
  into propf ≡ λx y. t, which ensures that @{method simp} steps always
  expand it. This also works for object-logic equality.

   @{attribute rotated}~n› rotates the premises of a theorem by n› (default
  1).

   @{attribute (Pure) elim_format} turns a destruction rule into elimination
  rule format, by resolving with the rule propPROP A ⟹ (PROP A ⟹ PROP B) ⟹
  PROP B.
  
  Note that the Classical Reasoner (\secref{sec:classical}) provides its own
  version of this operation.

   @{attribute no_vars} replaces schematic variables by free ones; this is
  mainly for tuning output of pretty printed theorems.
›
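
text ‹
  A small illustration of some of these attributes, using the standard
  Isabelle/HOL facts @{thm [source] conjI} and @{thm [source] conjunct1}
  (a sketch for inspection with the @{command thm} command):
›

thm conjI [rotated]         ― ‹premises exchanged: ?Q ⟹ ?P ⟹ ?P ∧ ?Q›
thm conjI [THEN conjunct1]  ― ‹composed by resolution: ?P ⟹ ?Q ⟹ ?P›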


subsection ‹Low-level equational reasoning›

text ‹
  \begin{matharray}{rcl}
    @{method_def subst} & : & method› \\
    @{method_def hypsubst} & : & method› \\
    @{method_def split} & : & method› \\
  \end{matharray}

  rail@@{method subst} ('(' 'asm' ')')?  ('(' (@{syntax nat}+) ')')? @{syntax thm}
    ;
    @@{method split} @{syntax thms}

  These methods provide low-level facilities for equational reasoning that are
  intended for specialized applications only. Normally, single step
  calculations would be performed in a structured text (see also
  \secref{sec:calculation}), while the Simplifier methods provide the
  canonical way for automated normalization (see \secref{sec:simplifier}).

   @{method subst}~eq› performs a single substitution step using rule eq›,
  which may be either a meta or object equality.

   @{method subst}~(asm) eq› substitutes in an assumption.

   @{method subst}~(i … j) eq› performs several substitutions in the
  conclusion. The numbers i› to j› indicate the positions to substitute at.
  Positions are ordered from the top of the term tree moving down from left to
  right. For example, in (a + b) + (c + d)› there are three positions where
  commutativity of +› is applicable: 1 refers to a + b›, 2 to the whole
  term, and 3 to c + d›.

  If the positions in the list (i … j)› are non-overlapping (e.g.\ (2 3)› in
  (a + b) + (c + d)›) you may assume all substitutions are performed
  simultaneously. Otherwise the behaviour of subst› is not specified.

   @{method subst}~(asm) (i … j) eq› performs the substitutions in the
  assumptions. The positions refer to the assumptions in order from left to
  right. For example, in a goal of the form P (a + b) ⟹ P (c + d) ⟹ …›,
  position 1 of commutativity of +› is the subterm a + b› and position 2 is
  the subterm c + d›.

   @{method hypsubst} performs substitution using some assumption; this only
  works for equations of the form x = t› where x› is a free or bound
  variable.

   @{method split}~a1 … an performs single-step case splitting using the
  given rules. Splitting is performed in the conclusion or some assumption of
  the subgoal, depending on the structure of the rule.
  
  Note that the @{method simp} method already involves repeated application of
  split rules as declared in the current context, using @{attribute split},
  for example.
›
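
text ‹
  Two minimal sketches in Isabelle/HOL, using the library facts @{thm
  [source] add.commute} and @{thm [source] add_0_right} for single
  substitution steps in the conclusion and in an assumption, respectively:
›

lemma fixes a b :: nat shows "a + b = b + a"
  by (subst add.commute) (rule refl)

lemma "x + 0 = y ⟹ x = (y::nat)"
  by (subst (asm) add_0_right) assumption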


section ‹The Simplifier \label{sec:simplifier}›

text ‹
  The Simplifier performs conditional and unconditional rewriting and uses
  contextual information: rule declarations in the background theory or local
  proof context are taken into account, as well as chained facts and subgoal
  premises (``local assumptions''). There are several general hooks that make
  it possible to modify the simplification strategy, or to incorporate other
  proof tools that solve sub-problems, produce rewrite rules on demand etc.

  The rewriting strategy is always strictly bottom up, except for congruence
  rules, which are applied while descending into a term. Conditions in
  conditional rewrite rules are solved recursively before the rewrite rule is
  applied.

  The default Simplifier setup of major object logics (HOL, HOLCF, FOL, ZF)
  makes the Simplifier ready for immediate use, without engaging into the
  internal structures. Thus it serves as a general-purpose proof tool with the
  main focus on equational reasoning, and a bit more than that.
›


subsection ‹Simplification methods \label{sec:simp-meth}›

text ‹
  \begin{tabular}{rcll}
    @{method_def simp} & : & method› \\
    @{method_def simp_all} & : & method› \\
    Pure.›@{method_def (Pure) simp} & : & method› \\
    Pure.›@{method_def (Pure) simp_all} & : & method› \\
    @{attribute_def simp_depth_limit} & : & attribute› & default 100› \\
  \end{tabular}
  

  rail‹
    (@@{method simp} | @@{method simp_all}) opt? (@{syntax simpmod} * )
    ;

    opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use' | 'asm_lr' ) ')'
    ;
    @{syntax_def simpmod}: ('add' | 'del' | 'flip' | 'only' |
      'split' (() | '!' | 'del') | 'cong' (() | 'add' | 'del'))
    ':' @{syntax thms}

   @{method simp} invokes the Simplifier on the first subgoal, after
  inserting chained facts as additional goal premises; further rule
  declarations may be included via (simp add: facts)›. The proof method fails
  if the subgoal remains unchanged after simplification.

  Note that the original goal premises and chained facts are subject to
  simplification themselves, while declarations via add›/del› merely follow
  the policies of the object-logic to extract rewrite rules from theorems,
  without further simplification. This may lead to slightly different behavior
  in either case, which might be exactly what is required in some boundary
  situations to perform the intended simplification step!

  
Modifier flip› deletes the following theorems from the simpset and adds
their symmetric version (i.e.\ lhs and rhs exchanged). No warning is shown
if the original theorem was not present.

  
  The only› modifier first removes all other rewrite rules, looper tactics
  (including split rules), congruence rules, and then behaves like add›.
  Implicit solvers remain, which means that trivial rules like reflexivity or
  introduction of True› are available to solve the simplified subgoals, but
  also non-trivial tools like linear arithmetic in HOL. The latter may lead to
  some surprise about the meaning of ``only'' in Isabelle/HOL compared to
  English!

  
  The split› modifiers add or delete rules for the Splitter (see also
  \secref{sec:simp-strategies} on the looper). This works only if the
  Simplifier method has been properly setup to include the Splitter (all major
  object logics such as HOL, HOLCF, FOL, ZF do this already).
  The !› option causes the split rules to be used aggressively:
  after each application of a split rule in the conclusion, the safe›
  tactic of the classical reasoner (see \secref{sec:classical:partial})
  is applied to the new goal. The net effect is that the goal is split into
  the different cases. This option can speed up simplification of goals
  with many nested conditional or case expressions significantly.

  There is also a separate @{method_ref split} method available for
  single-step case splitting. The effect of repeatedly applying (split thms)›
  can be imitated by ``(simp only: split: thms)›''.

  
  The cong› modifiers add or delete Simplifier congruence rules (see also
  \secref{sec:simp-rules}); the default is to add.

   @{method simp_all} is similar to @{method simp}, but acts on all goals,
  working backwards from the last to the first one as usual in Isabelle.‹The
  order is irrelevant for goals without schematic variables, so simplification
  might actually be performed in parallel here.›

  Chained facts are inserted into all subgoals, before the simplification
  process starts. Further rule declarations are the same as for @{method
  simp}.

  The proof method fails if all subgoals remain unchanged after
  simplification.

   @{attribute simp_depth_limit} limits the number of recursive invocations
  of the Simplifier during conditional rewriting.


  By default the Simplifier methods above take local assumptions fully into
  account, using equational assumptions in the subsequent normalization
  process, or simplifying assumptions themselves. Further options allow one to
  fine-tune the behavior of the Simplifier in this respect, corresponding to a
  variety of ML tactics as follows.‹Unlike the corresponding Isar proof
  methods, the ML tactics do not insist on changing the goal state.›

  \begin{center}
  \small
  \begin{tabular}{|l|l|p{0.3\textwidth}|}
  \hline
  Isar method & ML tactic & behavior \\\hline

  (simp (no_asm))› & MLsimp_tac & assumptions are ignored completely
  \\\hline

  (simp (no_asm_simp))› & MLasm_simp_tac & assumptions are used in the
  simplification of the conclusion but are not themselves simplified \\\hline

  (simp (no_asm_use))› & MLfull_simp_tac & assumptions are simplified but
  are not used in the simplification of each other or the conclusion \\\hline

  (simp)› & MLasm_full_simp_tac & assumptions are used in the
  simplification of the conclusion and to simplify other assumptions \\\hline

  (simp (asm_lr))› & MLasm_lr_simp_tac & compatibility mode: an
  assumption is only used for simplifying assumptions which are to the right
  of it \\\hline

  \end{tabular}
  \end{center}

  
  In Isabelle/Pure, proof methods @{method (Pure) simp} and @{method (Pure)
  simp_all} only know about meta-equality ≡›. Any new object-logic needs to
  re-define these methods via MLSimplifier.method_setup in ML:
  Isabelle/FOL or Isabelle/HOL may serve as blueprints.
›


subsubsection ‹Examples›

text ‹
  We consider basic algebraic simplifications in Isabelle/HOL. The rather
  trivial goal prop0 + (x + 0) = x + 0 + 0 looks like a good candidate
  to be solved by a single call of @{method simp}:
›

lemma "0 + (x + 0) = x + 0 + 0" apply simp? oops

text ‹
  The above attempt ‹fails›, because term0 and term(+) in the
  HOL library are declared as generic type class operations, without stating
  any algebraic laws yet. More specific types are required to get access to
  certain standard simplifications of the theory context, e.g.\ like this:›

lemma fixes x :: nat shows "0 + (x + 0) = x + 0 + 0" by simp
lemma fixes x :: int shows "0 + (x + 0) = x + 0 + 0" by simp
lemma fixes x :: "'a :: monoid_add" shows "0 + (x + 0) = x + 0 + 0" by simp

text 
  In many cases, assumptions of a subgoal are also needed in the
  simplification process. For example:
›

lemma fixes x :: nat shows "x = 0 ⟹ x + x = 0" by simp
lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" apply simp oops
lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" using assms by simp

text ‹
  As seen above, local assumptions that shall contribute to simplification
  need to be part of the subgoal already, or indicated explicitly for use by
  the subsequent method invocation. Either too little or too much information
  can make simplification fail, for different reasons.

  In the next example the malicious assumption prop∀x::nat. f x = g (f (g
  x)) does not contribute to solve the problem, but makes the default
  @{method simp} method loop: the rewrite rule f ?x ≡ g (f (g ?x))› extracted
  from the assumption does not terminate. The Simplifier notices certain
  simple forms of nontermination, but not this one. The problem can be solved
  nonetheless, by ignoring assumptions via special options as explained
  before:
›

lemma "(x::nat. f x = g (f (g x)))  f 0 = f 0 + 0"
  by (simp (no_asm))

text ‹
  The latter form is typical for long unstructured proof scripts, where the
  control over the goal content is limited. In structured proofs it is usually
  better to avoid pushing too many facts into the goal state in the first
  place. Assumptions in the Isar proof context do not intrude on the reasoning
  unless used explicitly. This is illustrated for a toplevel statement and a
  local proof body as follows:
›

lemma
  assumes "x::nat. f x = g (f (g x))"
  shows "f 0 = f 0 + 0" by simp

notepad
begin
  assume "x::nat. f x = g (f (g x))"
  have "f 0 = f 0 + 0" by simp
end

text 
  Because assumptions may simplify each other, there can be very subtle cases
  of nontermination. For example, the regular @{method simp} method applied to
  propP (f x) ⟹ y = x ⟹ f x = f y ⟹ Q gives rise to the infinite
  reduction sequence
  \[
  P (f x)› \stackrel{f x ≡ f y›}{\longmapsto}
  P (f y)› \stackrel{y ≡ x›}{\longmapsto}
  P (f x)› \stackrel{f x ≡ f y›}{\longmapsto} \cdots
  \]
  whereas applying the same to propy = x ⟹ f x = f y ⟹ P (f x) ⟹ Q
  terminates (without solving the goal):
›

lemma "y = x  f x = f y  P (f x)  Q"
  apply simp
  oops

text ‹
  See also \secref{sec:simp-trace} for options to enable Simplifier trace
  mode, which often helps to diagnose problems with rewrite systems.
›


subsection ‹Declaring rules \label{sec:simp-rules}›

text ‹
  \begin{matharray}{rcl}
    @{attribute_def simp} & : & attribute› \\
    @{attribute_def split} & : & attribute› \\
    @{attribute_def cong} & : & attribute› \\
    @{command_def "print_simpset"}* & : & context →› \\
  \end{matharray}

  rail‹
    (@@{attribute simp} | @@{attribute cong}) (() | 'add' | 'del') |
    @@{attribute split} (() | '!' | 'del')
    ;
    @@{command print_simpset} ('!'?)
  ›

   @{attribute simp} declares rewrite rules, by adding or deleting them from
  the simpset within the theory or proof context. Rewrite rules are theorems
  expressing some form of equality, for example:

  Suc ?m + ?n = ?m + Suc ?n› \\
  ?P ∧ ?P ⟷ ?P› \\
  ?A ∪ ?B ≡ {x. x ∈ ?A ∨ x ∈ ?B}›

  
  Conditional rewrites such as ?m < ?n ⟹ ?m div ?n = 0› are also permitted;
  the conditions can be arbitrary formulas.

  
  Internally, all rewrite rules are translated into Pure equalities, theorems
  with conclusion lhs ≡ rhs›. The simpset contains a function for extracting
  equalities from arbitrary theorems, which is usually installed when the
  object-logic is configured initially. For example, ¬ ?x ∈ {}› could be
  turned into ?x ∈ {} ≡ False›. Theorems that are declared as @{attribute
  simp} and local assumptions within a goal are treated uniformly in this
  respect.

  The Simplifier accepts the following formats for the lhs› term:

     First-order patterns, considering the sublanguage of application of
    constant operators to variable operands, without λ›-abstractions or
    functional variables. For example:

    (?x + ?y) + ?z ≡ ?x + (?y + ?z)› \\
    f (f ?x ?y) ?z ≡ f ?x (f ?y ?z)›

     Higher-order patterns in the sense of cite"nipkow-patterns". These
    are terms in β›-normal form (this will always be the case unless you have
    done something strange) where each occurrence of an unknown is of the form
    ?F x1 … xn, where the xi are distinct bound variables.

    For example, (∀x. ?P x ∧ ?Q x) ≡ (∀x. ?P x) ∧ (∀x. ?Q x)› or its
    symmetric form, since the rhs› is also a higher-order pattern.

     Physical first-order patterns over raw λ›-term structure without
    αβη›-equality; abstractions and bound variables are treated like
    quasi-constant term material.

    For example, the rule ?f ?x ∈ range ?f = True› rewrites the term g a ∈
    range g› to True›, but will fail to match g (h b) ∈ range (λx. g (h
    x))›. However, offending subterms (in our case ?f ?x›, which is not a
    pattern) can be replaced by adding new variables and conditions like this:
    ?y = ?f ?x ⟹ ?y ∈ range ?f = True› is acceptable as a conditional rewrite
    rule of the second category since conditions can be arbitrary terms.

   @{attribute split} declares case split rules.

   @{attribute cong} declares congruence rules to the Simplifier context.

  Congruence rules are equalities of the form @{text [display]
  "… ⟹ f ?x1 … ?xn = f ?y1 … ?yn"}

  This controls the simplification of the arguments of f›. For example, some
  arguments can be simplified under additional assumptions:
  @{text [display]
    "?P1 ⟷ ?Q1 ⟹
    (?Q1 ⟹ ?P2 ⟷ ?Q2) ⟹
    (?P1 ⟶ ?P2) ⟷ (?Q1 ⟶ ?Q2)"}

  Given this rule, the Simplifier assumes ?Q1 and extracts rewrite rules
  from it when simplifying ?P2. Such local assumptions are effective for
  rewriting formulae such as x = 0 ⟶ y + x = y›.

  %FIXME
  %The local assumptions are also provided as theorems to the solver;
  %see \secref{sec:simp-solver} below.

  
  The following congruence rule for bounded quantifiers also supplies
  contextual information --- about the bound variable: @{text [display]
  "(?A = ?B) ⟹
    (⋀x. x ∈ ?B ⟹ ?P x ⟷ ?Q x) ⟹
    (∀x ∈ ?A. ?P x) ⟷ (∀x ∈ ?B. ?Q x)"}

  
  This congruence rule for conditional expressions can supply contextual
  information for simplifying the arms: @{text [display]
  "?p = ?q ⟹
    (?q ⟹ ?a = ?c) ⟹
    (¬ ?q ⟹ ?b = ?d) ⟹
    (if ?p then ?a else ?b) = (if ?q then ?c else ?d)"}

  A congruence rule can also ‹prevent› simplification of some arguments. Here
  is an alternative congruence rule for conditional expressions that conforms
  to non-strict functional evaluation: @{text [display]
  "?p = ?q ⟹
    (if ?p then ?a else ?b) = (if ?q then ?a else ?b)"}

  Only the first argument is simplified; the others remain unchanged. This can
  make simplification much faster, but may require an extra case split over
  the condition ?q› to prove the goal.

   @{command "print_simpset"} prints the collection of rules declared to the
  Simplifier, which is also known as ``simpset'' internally; the ``!›''
  option indicates extra verbosity.

  The implicit simpset of the theory context is propagated monotonically
  through the theory hierarchy: forming a new theory, the union of the
  simpsets of its imports are taken as starting point. Also note that
  definitional packages like @{command "datatype"}, @{command "primrec"},
  @{command "fun"} routinely declare Simplifier rules to the target context,
  while plain @{command "definition"} is an exception in ‹not› declaring
  anything.

  
  It is up to the user to manipulate the current simpset further by explicitly
  adding or deleting theorems as simplification rules, or installing other
  tools via simplification procedures (\secref{sec:simproc}). Good simpsets
  are hard to design. Rules that obviously simplify, like ?n + 0 ≡ ?n› are
  good candidates for the implicit simpset, unless a special non-normalizing
  behavior of certain operations is intended. More specific rules (such as
  distributive laws, which duplicate subterms) should be added only for
  specific proof steps. Conversely, sometimes a rule needs to be deleted just
  for some part of a proof. The need of frequent additions or deletions may
  indicate a poorly designed simpset.

  \begin{warn}
  The union of simpsets from theory imports (as described above) is not always
  a good starting point for the new theory. If some ancestors have deleted
  simplification rules because they are no longer wanted, while others have
  left those rules in, then the union will contain the unwanted rules, which
  then have to be deleted again in the theory body.
  \end{warn}
›
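
text ‹
  A small illustrative declaration block (a sketch; the facts are standard
  Isabelle/HOL library theorems):
›

(*<*)experiment begin(*>*)
declare add.assoc [simp]  ― ‹oriented left-to-right, hence terminating›
declare if_cong [cong]    ― ‹strong congruence rule for conditionals›

print_simpset
(*<*)end(*>*)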


subsection ‹Ordered rewriting with permutative rules›

text ‹
  A rewrite rule is ‹permutative› if the left-hand side and right-hand side
  are equal up to renaming of variables. The most common permutative rule
  is commutativity: ?x + ?y = ?y + ?x›. Other examples include (?x - ?y) -
  ?z = (?x - ?z) - ?y› in arithmetic and insert ?x (insert ?y ?A) = insert ?y
  (insert ?x ?A)› for sets. Such rules are common enough to merit special
  attention.

  Because ordinary rewriting loops given such rules, the Simplifier employs a
  special strategy, called ‹ordered rewriting›. Permutative rules are
  detected and only applied if the rewriting step decreases the redex wrt.\ a
  given term ordering. For example, commutativity rewrites b + a› to a + b›,
  but then stops, because the redex cannot be decreased further in the sense
  of the term ordering.

  The default is lexicographic ordering of term structure, but this could be
  also changed locally for special applications via @{define_ML
  Simplifier.set_term_ord} in Isabelle/ML.

  
  Permutative rewrite rules are declared to the Simplifier just like other
  rewrite rules. Their special status is recognized automatically, and their
  application is guarded by the term ordering accordingly.
›


subsubsection ‹Rewriting with AC operators›

text ‹
  Ordered rewriting is particularly effective in the case of
  associative-commutative operators. (Associativity by itself is not
  permutative.) When dealing with an AC-operator f›, keep the following
  points in mind:

     The associative law must always be oriented from left to right, namely
    f (f x y) z = f x (f y z)›. The opposite orientation, if used with
    commutativity, leads to looping in conjunction with the standard term
    order.

     To complete your set of rewrite rules, you must add not just
    associativity (A) and commutativity (C) but also a derived rule
    ‹left-commutativity› (LC): f x (f y z) = f y (f x z)›.

  Ordered rewriting with the combination of A, C, and LC sorts a term
  lexicographically --- the rewriting engine imitates bubble-sort.
›

experiment
  fixes f :: "'a  'a  'a"  (infix "" 60)
  assumes assoc: "(x  y)  z = x  (y  z)"
  assumes commute: "x  y = y  x"
begin

lemma left_commute: "x ⊙ (y ⊙ z) = y ⊙ (x ⊙ z)"
proof -
  have "(x ⊙ y) ⊙ z = (y ⊙ x) ⊙ z" by (simp only: commute)
  then show ?thesis by (simp only: assoc)
qed

lemmas AC_rules = assoc commute left_commute

text ‹
  Thus the Simplifier is able to establish equalities with arbitrary
  permutations of subterms, by normalizing to a common standard form. For
  example:
›

lemma "(b  c)  a = xxx"
  apply (simp only: AC_rules)
  txt subgoals
  oops

lemma "(b  c)  a = a  (b  c)" by (simp only: AC_rules)
lemma "(b  c)  a = c  (b  a)" by (simp only: AC_rules)
lemma "(b  c)  a = (c  b)  a" by (simp only: AC_rules)

end

text ‹
  Martin and Nipkow cite"martin-nipkow" discuss the theory and give many
  examples; other algebraic structures are amenable to ordered rewriting, such
  as Boolean rings. The Boyer-Moore theorem prover citebm88book also
  employs ordered rewriting.
›


subsubsection ‹Re-orienting equalities›

text ‹Another application of ordered rewriting uses the derived rule
  @{thm [source] eq_commute}: @{thm [source = false] eq_commute} to
  reverse equations.

  This is occasionally useful to re-orient local assumptions according
  to the term ordering, when other built-in mechanisms of
  reorientation and mutual simplification fail to apply.›


subsection ‹Simplifier tracing and debugging \label{sec:simp-trace}›

text ‹
  \begin{tabular}{rcll}
    @{attribute_def simp_trace} & : & attribute› & default false› \\
    @{attribute_def simp_trace_depth_limit} & : & attribute› & default 1› \\
    @{attribute_def simp_debug} & : & attribute› & default false› \\
    @{attribute_def simp_trace_new} & : & attribute› \\
    @{attribute_def simp_break} & : & attribute› \\
  \end{tabular}
  

  rail@@{attribute simp_trace_new} ('interactive')? 
      ('mode' '=' ('full' | 'normal'))? 
      ('depth' '=' @{syntax nat})?
    ;

    @@{attribute simp_break} (@{syntax term}*)
  ›

  These attributes and configuration options control various aspects of
  Simplifier tracing and debugging.

   @{attribute simp_trace} makes the Simplifier output internal operations.
  This includes rewrite steps (but not traces from simproc calls),
  but also bookkeeping like modifications of the simpset.

   @{attribute simp_trace_depth_limit} limits the effect of @{attribute
  simp_trace} to the given depth of recursive Simplifier invocations (when
  solving conditions of rewrite rules).

   @{attribute simp_debug} makes the Simplifier output some extra information
  about internal operations. This includes any attempted invocation of
  simplification procedures and the corresponding traces.

   @{attribute simp_trace_new} controls Simplifier tracing within
  Isabelle/PIDE applications, notably Isabelle/jEdit cite"isabelle-jedit".
  This provides a hierarchical representation of the rewriting steps performed
  by the Simplifier.

  Users can configure the behaviour by specifying breakpoints, adjusting the
  verbosity, and enabling or disabling the interactive mode. In normal
  verbosity (the default), only rule applications matching a breakpoint will
  be shown to the user. In full verbosity, all rule applications will be
  logged. Interactive
  mode interrupts the normal flow of the Simplifier and defers the decision
  how to continue to the user via some GUI dialog.

   @{attribute simp_break} declares term or theorem breakpoints for
  @{attribute simp_trace_new} as described above. Term breakpoints are
  patterns which are checked for matches on the redex of a rule application.
  Theorem breakpoints trigger when the corresponding theorem is applied in a
  rewrite step. For example:
›

(*<*)experiment begin(*>*)
declare conjI [simp_break]
declare [[simp_break "?x ∧ ?y"]]
(*<*)end(*>*)
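
text ‹
  Tracing may also be enabled locally for a single proof like this (a
  minimal sketch):
›

(*<*)experiment begin(*>*)
lemma "0 + (x::nat) = x"
  using [[simp_trace, simp_trace_depth_limit = 2]]
  by simp
(*<*)end(*>*)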


subsection ‹Simplification procedures \label{sec:simproc}›

text ‹
  A ‹simplification procedure› or ‹simproc› is an ML function that produces
  proven rewrite rules on demand. Simprocs are guarded by multiple ‹patterns›
  for the left-hand sides of equations. The Simplifier first matches the
  current redex against one of the LHS patterns; if this succeeds, the
  corresponding ML function is invoked, passing the Simplifier context and
  redex term. The function may choose to succeed with a specific result for
  the redex, or fail.

  The successful result of a simproc needs to be a (possibly conditional)
  rewrite rule t ≡ u› that is applicable to the current redex. The rule will
  be applied just as any ordinary rewrite rule. It is expected to be already
  in ‹internal form› of the Pure logic, bypassing the automatic preprocessing
  of object-level equivalences.

  \begin{matharray}{rcl}
    @{command_def "simproc_setup"} & : & local_theory → local_theory› \\
    @{ML_antiquotation_def "simproc_setup"} & : & ML antiquotation› \\
    simproc & : & attribute› \\
  \end{matharray}

  rail@{syntax_def simproc_setup}: (@'passive')? @{syntax name}
      patterns '=' @{syntax embedded}
    ;
    @{syntax_def simproc_setup_id}:
      @{syntax simproc_setup} (@'identifier' @{syntax thms})?
    ;
    patterns: '(' (@{syntax term} + '|') ')'
    ;
    @@{command simproc_setup} @{syntax simproc_setup}
    ;
    @@{attribute simproc} (('add' ':')? | 'del' ':') (@{syntax name}+)
  ›

   Command @{command "simproc_setup"} defines a named simplification
  procedure that is invoked by the Simplifier whenever any of the given term
  patterns match the current redex. The implementation, which is provided as
  embedded ML source, needs to be of type
  ML_typemorphism -> Proof.context -> cterm -> thm option,
  where the ML_typecterm represents the current redex r› and the result
  is supposed to be MLSOME proven rewrite rule r ≡ r'› (or a
  generalized version); MLNONE indicates failure. The
  ML_typeProof.context argument holds the full context of the current
  Simplifier invocation.

  The ML_typemorphism tells how to move from the abstract context of the
  original definition into the concrete context of applications. This is only
  relevant for simprocs that are defined ``in'' a local theory context
  (e.g.\ @{command "locale"} with later @{command "interpretation"}).

  By default, the simproc is declared to the current Simplifier context and
  thus ‹active›. The keyword passive avoids that: it merely defines a
  simproc that can be activated in a different context later on.

   ML antiquotation @{ML_antiquotation_ref simproc_setup} is like command
  @{command simproc_setup}, with slightly extended syntax following @{syntax
  simproc_setup_id}. It allows one to introduce a new simproc conveniently
  within an ML module, and to refer directly to its ML value. For example,
  see various uses in @{file "~~/src/HOL/Tools/record.ML"}.

  The optional identifier specifies characteristic theorems to distinguish
  simproc instances after application of morphisms, e.g.\ @{command locale}
  with multiple @{command interpretation}. See also the minimal example below.

   Attributes [simproc add: name]› and [simproc del: name]› add or delete
  named simprocs to the current Simplifier context. The default is to add a
  simproc. Note that @{command "simproc_setup"} already adds the new simproc
  by default, unless it is specified as passive.
›


subsubsection ‹Examples›

text ‹
  The following simplification procedure for @{thm [source = false,
  show_types] unit_eq} in HOL performs fine-grained control over rule
  application, beyond higher-order pattern matching. Declaring @{thm unit_eq}
  as @{attribute simp} directly would make the Simplifier loop! Note that a
  version of this simplification procedure is already active in Isabelle/HOL.
›

(*<*)experiment begin(*>*)
simproc_setup unit ("x::unit") =
  K (K (fn ct =>
    if HOLogic.is_unit (Thm.term_of ct) then NONE
    else SOME (mk_meta_eq @{thm unit_eq})))
(*<*)end(*>*)

text ‹
  Since the Simplifier applies simplification procedures frequently, it is
  important to make the failure check in ML reasonably fast.

   The subsequent example shows how to define a local simproc with extra
  identifier to distinguish its instances after interpretation:
›

locale loc = fixes x y :: 'a assumes eq: "x = y"
begin

ML simproc_setupproc (x) =
    fn phi => fn _ => fn _ => SOME (Morphism.thm phi @{thm eq})
    identifier loc_axioms

end


subsection ‹Configurable Simplifier strategies \label{sec:simp-strategies}›

text ‹
  The core term-rewriting engine of the Simplifier is normally used in
  combination with some add-on components that modify the strategy and allow
  other non-Simplifier proof tools to be integrated. These may be reconfigured in
  ML as explained below. Even if the default strategies of object-logics like
  Isabelle/HOL are used unchanged, it helps to understand how the standard
  Simplifier strategies work.›


subsubsection ‹The subgoaler›

text ‹
  \begin{mldecls}
  @{define_ML Simplifier.set_subgoaler: "(Proof.context -> int -> tactic) ->
  Proof.context -> Proof.context"} \\
  @{define_ML Simplifier.prems_of: "Proof.context -> thm list"} \\
  \end{mldecls}

  The subgoaler is the tactic used to solve subgoals arising out of
  conditional rewrite rules or congruence rules. The default should be
  simplification itself. In rare situations, this strategy may need to be
  changed. For example, if the premise of a conditional rule is an instance of
  its conclusion, as in Suc ?m < ?n ⟹ ?m < ?n›, the default strategy could
  loop. % FIXME !??

     MLSimplifier.set_subgoaler~tac ctxt› sets the subgoaler of the
    context to tac›. The tactic will be applied to the context of the running
    Simplifier instance.

     MLSimplifier.prems_of~ctxt› retrieves the current set of premises
    from the context. This may be non-empty only if the Simplifier has been
    told to utilize local assumptions in the first place (cf.\ the options in
    \secref{sec:simp-meth}).

  As an example, consider the following alternative subgoaler:
›

ML_val fun subgoaler_tac ctxt =
    assume_tac ctxt ORELSE'
    resolve_tac ctxt (Simplifier.prems_of ctxt) ORELSE'
    asm_simp_tac ctxt

text ‹
  This tactic first tries to solve the subgoal by assumption or by resolving
  with one of the premises, calling simplification only if that fails.›
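
text ‹
  Installing such a subgoaler could look like this (a self-contained sketch
  of the same tactic, using the operations declared above):
›

ML_val ‹
  fun subgoaler_tac ctxt =
    assume_tac ctxt ORELSE'
    resolve_tac ctxt (Simplifier.prems_of ctxt) ORELSE'
    asm_simp_tac ctxt;

  (*a context with the alternative subgoaler installed*)
  fun install ctxt = Simplifier.set_subgoaler subgoaler_tac ctxt;
›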


subsubsection ‹The solver›

text ‹
  \begin{mldecls}
  @{define_ML_type solver} \\
  @{define_ML Simplifier.mk_solver: "string ->
  (Proof.context -> int -> tactic) -> solver"} \\
  @{define_ML_infix setSolver: "Proof.context * solver -> Proof.context"} \\
  @{define_ML_infix addSolver: "Proof.context * solver -> Proof.context"} \\
  @{define_ML_infix setSSolver: "Proof.context * solver -> Proof.context"} \\
  @{define_ML_infix addSSolver: "Proof.context * solver -> Proof.context"} \\
  \end{mldecls}

  A solver is a tactic that attempts to solve a subgoal after simplification.
  Its core functionality is to prove trivial subgoals such as propTrue
  and t = t›, but object-logics might be more ambitious. For example,
  Isabelle/HOL performs a restricted version of linear arithmetic here.

  Solvers are packaged up in abstract type ML_typesolver, with MLSimplifier.mk_solver as the only operation to create a solver.

  
  Rewriting does not instantiate unknowns. For example, rewriting alone cannot
  prove a ∈ ?A› since this requires instantiating ?A›. The solver, however,
  is an arbitrary tactic and may instantiate unknowns as it pleases. This is
  the only way the Simplifier can handle a conditional rewrite rule whose
  condition contains extra variables. When a simplification tactic is to be
  combined with other provers, especially with the Classical Reasoner, it is
  important whether it can be considered safe or not. For this reason a
  simpset contains two solvers: safe and unsafe.

  The standard simplification strategy solely uses the unsafe solver, which is
  appropriate in most cases. For special applications where the simplification
  process is not allowed to instantiate unknowns within the goal,
  simplification starts with the safe solver, but may still apply the ordinary
  unsafe one in nested simplifications for conditional rules or congruences.
  Note that in this way the overall tactic is not totally safe: it may
  instantiate unknowns that appear also in other subgoals.

   MLSimplifier.mk_solver~name tac› turns tac› into a solver; the
  name› is only attached as a comment and has no further significance.

   ctxt setSSolver solver› installs solver› as the safe solver of ctxt›.

   ctxt addSSolver solver› adds solver› as an additional safe solver; it
  will be tried after the solvers which had already been present in ctxt›.

   ctxt setSolver solver› installs solver› as the unsafe solver of ctxt›.

   ctxt addSolver solver› adds solver› as an additional unsafe solver; it
  will be tried after the solvers which had already been present in ctxt›.


  
  The solver tactic is invoked with the context of the running Simplifier.
  Further operations may be used to retrieve relevant information, such as the
  list of local Simplifier premises via MLSimplifier.prems_of --- this
  list may be non-empty only if the Simplifier runs in a mode that utilizes
  local assumptions (see also \secref{sec:simp-meth}). The solver is also
  presented with the full goal including its assumptions in any case. Thus it
  can use these (e.g.\ by calling MLassume_tac), even if the Simplifier proper
  happens to ignore local premises at the moment.

  
  As explained before, the subgoaler is also used to solve the premises of
  congruence rules. These are usually of the form s = ?x›, where s› needs to
  be simplified and ?x› needs to be instantiated with the result. Typically,
  the subgoaler will invoke the Simplifier at some point, which will
  eventually call the solver. For this reason, solver tactics must be prepared
  to solve goals of the form t = ?x›, usually by reflexivity. In particular,
  reflexivity should be tried before any of the fancy automated proof tools.

  It may even happen that due to simplification the subgoal is no longer an
  equality. For example, False ⟷ ?Q› could be rewritten to ¬ ?Q›. To cover
  this case, the solver could try resolving with the theorem ¬ False› of the
  object-logic.

  
  \begin{warn}
  If a premise of a congruence rule cannot be proved, then the congruence is
  ignored. This should only happen if the rule is ‹conditional› --- that is,
  contains premises not of the form t = ?x›. Otherwise it indicates that some
  congruence rule, or possibly the subgoaler or solver, is faulty.
  \end{warn}
›
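
text ‹
  Creating and installing a solver could look like this (a sketch; the
  solver merely tries reflexivity and then proof by assumption, in line with
  the description above):
›

ML_val ‹
  val my_solver =
    Simplifier.mk_solver "refl or assumption"
      (fn ctxt => resolve_tac ctxt @{thms refl} ORELSE' assume_tac ctxt);

  (*a context with my_solver as additional unsafe solver*)
  fun install ctxt = ctxt addSolver my_solver;
›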


subsubsection ‹The looper›

text ‹
  \begin{mldecls}
  @{define_ML_infix setloop: "Proof.context *
  (Proof.context -> int -> tactic) -> Proof.context"} \\
  @{define_ML_infix addloop: "Proof.context *
  (string * (Proof.context -> int -> tactic))
  -> Proof.context"} \\
  @{define_ML_infix delloop: "Proof.context * string -> Proof.context"} \\
  @{define_ML Splitter.add_split: "thm -> Proof.context -> Proof.context"} \\
  @{define_ML Splitter.add_split_bang: "
  thm -> Proof.context -> Proof.context"} \\
  @{define_ML Splitter.del_split: "thm -> Proof.context -> Proof.context"} \\
  \end{mldecls}

  The looper is a list of tactics that are applied after simplification, in
  case the solver failed to solve the simplified goal. If the looper succeeds,
  the simplification process is started all over again. Each of the subgoals
  generated by the looper is attacked in turn, in reverse order.

  A typical looper is ‹case splitting›: the expansion of a conditional.
  Another possibility is to apply an elimination rule on the assumptions. More
  adventurous loopers could start an induction.

     ctxt setloop tac› installs tac› as the only looper tactic of ctxt›.

     ctxt addloop (name, tac)› adds tac› as an additional looper tactic
    with name name›, which is significant for managing the collection of
    loopers. The tactic will be tried after the looper tactics that had
    already been present in ctxt›.

     ctxt delloop name› deletes the looper tactic that was associated with
    name› from ctxt›.

     MLSplitter.add_split~thm ctxt› adds split tactic
    for thm› as additional looper tactic of ctxt›
    (overwriting previous split tactic for the same constant).

     MLSplitter.add_split_bang~thm ctxt› adds aggressive
    (see \S\ref{sec:simp-meth})
    split tactic for thm› as additional looper tactic of ctxt›
    (overwriting previous split tactic for the same constant).

     MLSplitter.del_split~thm ctxt› deletes the split tactic
    corresponding to thm› from the looper tactics of ctxt›.

  The splitter replaces applications of a given function; the right-hand side
  of the replacement can be anything. For example, here is a splitting rule
  for conditional expressions:

  @{text [display] "?P (if ?Q ?x ?y) ⟷ (?Q ⟶ ?P ?x) ∧ (¬ ?Q ⟶ ?P ?y)"}

  Another example is the elimination operator for Cartesian products (which
  happens to be called constcase_prod in Isabelle/HOL):

  @{text [display] "?P (case_prod ?f ?p) ⟷ (∀a b. ?p = (a, b) ⟶ ?P (f a b))"}

  For technical reasons, there is a distinction between case splitting in the
  conclusion and in the premises of a subgoal. The former is done by MLSplitter.split_tac with rules like @{thm [source] if_split} or @{thm
  [source] option.split}, which do not split the subgoal, while the latter is
  done by MLSplitter.split_asm_tac with rules like @{thm [source]
  if_split_asm} or @{thm [source] option.split_asm}, which split the subgoal.
  The function MLSplitter.add_split automatically takes care of which
  tactic to call, analyzing the form of the rules given as argument; it is the
  same operation behind split› attribute or method modifier syntax in the
  Isar source language.

  Case splits should be allowed only when necessary; they are expensive and
  hard to control. Case-splitting on if-expressions in the conclusion is
  usually beneficial, so it is enabled by default in Isabelle/HOL and
  Isabelle/FOL/ZF.

  \begin{warn}
  With MLSplitter.split_asm_tac as looper component, the Simplifier may
  split subgoals! This might cause unexpected problems in tactic expressions
  that silently assume 0 or 1 subgoals after simplification.
  \end{warn}
›
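
text ‹
  For example, the if›-splitter of Isabelle/HOL could be (re)installed as a
  looper component like this (a minimal sketch):
›

ML_val ‹
  (*a context with the split tactic for if_split as additional looper*)
  fun install ctxt = Splitter.add_split @{thm if_split} ctxt;
›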


subsection ‹Forward simplification \label{sec:simp-forward}›

text ‹
  \begin{matharray}{rcl}
    @{attribute_def simplified} & : & attribute› \\
  \end{matharray}

  rail@@{attribute simplified} opt? @{syntax thms}?
    ;

    opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use') ')'

   @{attribute simplified}~a1 … an causes a theorem to be simplified,
  either by exactly the specified rules a1, …, an, or the implicit
  Simplifier context if no arguments are given. The result is fully simplified
  by default, including assumptions and conclusion; the options no_asm› etc.\
  tune the Simplifier in the same way as for the simp› method.

  Note that forward simplification restricts the Simplifier to its most basic
  operation of term rewriting; solver and looper tactics
  (\secref{sec:simp-strategies}) are ‹not› involved here. The @{attribute
  simplified} attribute should be only rarely required under normal
  circumstances.
›
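
text ‹
  For example (a sketch using a locally proved fact and the Isabelle/HOL
  rule @{thm [source] add_0_right}):
›

(*<*)experiment begin(*>*)
lemma ex: "x + 0 + 0 = y ⟹ P x ⟹ P (y::nat)" by simp

thm ex [simplified]              ― ‹fully simplified version›
thm ex [simplified add_0_right]  ― ‹simplified by exactly the given rule›
(*<*)end(*>*)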


section ‹The Classical Reasoner \label{sec:classical}›

subsection ‹Basic concepts›

text ‹Although Isabelle is generic, many users will be working in
  some extension of classical first-order logic.  Isabelle/ZF is built
  upon theory FOL, while Isabelle/HOL conceptually contains
  first-order logic as a fragment.  Theorem-proving in predicate logic
  is undecidable, but many automated strategies have been developed to
  assist in this task.

  Isabelle's classical reasoner is a generic package that accepts
  certain information about a logic and delivers a suite of automatic
  proof tools, based on rules that are classified and declared in the
  context.  These proof procedures are slow and simplistic compared
  with high-end automated theorem provers, but they can save
  considerable time and effort in practice.  They can prove theorems
  such as Pelletier's citepelletier86 problems 40 and 41 in a few
  milliseconds (including full proof reconstruction):›

lemma "(y. x. F x y  F x x)  ¬ (x. y. z. F z y  ¬ F z x)"
  by blast

lemma "(z. y. x. f x y  f x z  ¬ f x x)  ¬ (z. x. f x z)"
  by blast

text ‹The proof tools are generic.  They are not restricted to
  first-order logic, and have been heavily used in the development of
  the Isabelle/HOL library and applications.  The tactics can be
  traced, and their components can be called directly; in this manner,
  any proof can be viewed interactively.›


subsubsection ‹The sequent calculus›

text ‹Isabelle supports natural deduction, which is easy to use for
  interactive proof.  But natural deduction does not easily lend
  itself to automation, and has a bias towards intuitionism.  For
  certain proofs in classical logic, it cannot be called natural.
  The ‹sequent calculus›, a generalization of natural deduction,
  is easier to automate.

  A ‹sequent› has the form Γ ⊢ Δ›, where Γ›
  and Δ› are sets of formulae.‹For first-order
  logic, sequents can equivalently be made from lists or multisets of
  formulae.› The sequent P1, …, Pm ⊢ Q1, …, Qn is
  ‹valid› if P1 ∧ … ∧ Pm implies Q1 ∨ … ∨
  Qn.  Thus P1, …, Pm represent assumptions, each of which
  is true, while Q1, …, Qn represent alternative goals.  A
  sequent is ‹basic› if its left and right sides have a common
  formula, as in P, Q ⊢ Q, R›; basic sequents are trivially
  valid.

  Sequent rules are classified as ‹right› or ‹left›,
  indicating which side of the ⊢› symbol they operate on.
  Rules that operate on the right side are analogous to natural
  deduction's introduction rules, and left rules are analogous to
  elimination rules.  The sequent calculus analogue of (⟶I)›
  is the rule
  \[
  \infer[(⟶R)›]{Γ ⊢ Δ, P ⟶ Q›}{P, Γ ⊢ Δ, Q›}
  \]
  Applying the rule backwards, this breaks down some implication on
  the right side of a sequent; Γ› and Δ› stand for
  the sets of formulae that are unaffected by the inference.  The
  analogue of the pair (∨I1)› and (∨I2)› is the
  single rule
  \[
  \infer[(∨R)›]{Γ ⊢ Δ, P ∨ Q›}{Γ ⊢ Δ, P, Q›}
  \]
  This breaks down some disjunction on the right side, replacing it by
  both disjuncts.  Thus, the sequent calculus is a kind of
  multiple-conclusion logic.

  To illustrate the use of multiple formulae on the right, let us
  prove the classical theorem (P ⟶ Q) ∨ (Q ⟶ P)›.  Working
  backwards, we reduce this formula to a basic sequent:
  \[
  \infer[(∨R)›]{⊢ (P ⟶ Q) ∨ (Q ⟶ P)›}
    {\infer[(⟶R)›]{⊢ (P ⟶ Q), (Q ⟶ P)›}
      {\infer[(⟶R)›]{P ⊢ Q, (Q ⟶ P)›}
        {P, Q ⊢ Q, P›}}}
  \]

  This example is typical of the sequent calculus: start with the
  desired theorem and apply rules backwards in a fairly arbitrary
  manner.  This yields a surprisingly effective proof procedure.
  Quantifiers add only a few complications, since Isabelle handles
  parameters and schematic variables.  See cite‹Chapter 10› in
  "paulson-ml2" for further discussion.›


subsubsection ‹Simulating sequents by natural deduction›

text ‹Isabelle can represent sequents directly, as in the
  object-logic LK.  But natural deduction is easier to work with, and
  most object-logics employ it.  Fortunately, we can simulate the
  sequent P1, …, Pm ⊢ Q1, …, Qn by the Isabelle formula
  P1 ⟹ … ⟹ Pm ⟹ ¬ Q2 ⟹ ... ⟹ ¬ Qn ⟹ Q1 where the order of
  the assumptions and the choice of Q1 are arbitrary.
  Elim-resolution plays a key role in simulating sequent proofs.

  We can easily handle reasoning on the left.  Elim-resolution with
  the rules (∨E)›, (⊥E)› and (∃E)› achieves
  a similar effect as the corresponding sequent rules.  For the other
  connectives, we use sequent-style elimination rules instead of
  destruction rules such as (∧E1, 2)› and (∀E)›.
  But note that the rule (¬L)› has no effect under our
  representation of sequents!
  \[
  \infer[(¬L)›]{¬ P, Γ ⊢ Δ›}{Γ ⊢ Δ, P›}
  \]

  What about reasoning on the right?  Introduction rules can only
  affect the formula in the conclusion, namely Q1.  The
  other right-side formulae are represented as negated assumptions,
  ¬ Q2, …, ¬ Qn.  In order to operate on one of these, it
  must first be exchanged with Q1.  Elim-resolution with the
  swap› rule has this effect: ¬ P ⟹ (¬ R ⟹ P) ⟹ R›

  To ensure that swaps occur only when necessary, each introduction
  rule is converted into a swapped form: it is resolved with the
  second premise of (swap)›.  The swapped form of (∧I)›, which might be called (¬∧E)›, is
  @{text [display] "¬ (P ∧ Q) ⟹ (¬ R ⟹ P) ⟹ (¬ R ⟹ Q) ⟹ R"}

  Similarly, the swapped form of (⟶I)› is
  @{text [display] "¬ (P ⟶ Q) ⟹ (¬ R ⟹ P ⟹ Q) ⟹ R"}

  Swapped introduction rules are applied using elim-resolution, which
  deletes the negated formula.  Our representation of sequents also
  requires the use of ordinary introduction rules.  If we had no
  regard for readability of intermediate goal states, we could treat
  the right side more uniformly by representing sequents as @{text
  [display] "P1 ⟹ … ⟹ Pm ⟹ ¬ Q1 ⟹ … ⟹ ¬ Qn ⟹ ⊥"}


subsubsection ‹Extra rules for the sequent calculus›

text ‹As mentioned, destruction rules such as (∧E1, 2)› and
  (∀E)› must be replaced by sequent-style elimination rules.
  In addition, we need rules to embody the classical equivalence
  between P ⟶ Q› and ¬ P ∨ Q›.  The introduction
  rules (∨I1, 2)› are replaced by a rule that simulates
  (∨R)›: @{text [display] "(¬ Q ⟹ P) ⟹ P ∨ Q"}

  The destruction rule (⟶E)› is replaced by @{text [display]
  "(P ⟶ Q) ⟹ (¬ P ⟹ R) ⟹ (Q ⟹ R) ⟹ R"}

  Quantifier replication also requires special rules.  In classical
  logic, ∃x. P x› is equivalent to ¬ (∀x. ¬ P x)›;
  the rules (∃R)› and (∀L)› are dual:
  \[
  \infer[(∃R)›]{Γ ⊢ Δ, ∃x. P x›}{Γ ⊢ Δ, ∃x. P x, P t›}
  \qquad
  \infer[(∀L)›]{∀x. P x, Γ ⊢ Δ›}{P t, ∀x. P x, Γ ⊢ Δ›}
  \]
  Thus both kinds of quantifier may be replicated.  Theorems requiring
  multiple uses of a universal formula are easy to invent; consider
  @{text [display] "(∀x. P x ⟶ P (f x)) ∧ P a ⟶ P (fn a)"} for any
  n > 1›.  Natural examples of the multiple use of an
  existential formula are rare; a standard one is ∃x. ∀y. P x
  ⟶ P y›.

  Forgoing quantifier replication loses completeness, but gains
  decidability, since the search space becomes finite.  Many useful
  theorems can be proved without replication, and the search generally
  delivers its verdict in a reasonable time.  To adopt this approach,
  represent the sequent rules (∃R)›, (∃L)› and
  (∀R)› by (∃I)›, (∃E)› and (∀I)›,
  respectively, and put (∀E)› into elimination form: @{text
  [display] "∀x. P x ⟹ (P t ⟹ Q) ⟹ Q"}

  Elim-resolution with this rule will delete the universal formula
  after a single use.  To replicate universal quantifiers, replace the
  rule by @{text [display] "∀x. P x ⟹ (P t ⟹ ∀x. P x ⟹ Q) ⟹ Q"}

  To replicate existential quantifiers, replace (∃I)› by
  @{text [display] "(¬ (∃x. P x) ⟹ P t) ⟹ ∃x. P x"}

  All introduction rules mentioned above are also useful in swapped
  form.

  Replication makes the search space infinite; we must apply the rules
  with care.  The classical reasoner distinguishes between safe and
  unsafe rules, applying the latter only when there is no alternative.
  Depth-first search may well go down a blind alley; best-first search
  is better behaved in an infinite search space.  However, quantifier
  replication is too expensive to prove any but the simplest theorems.
›


subsection ‹Rule declarations›

text ‹The proof tools of the Classical Reasoner depend on
  collections of rules declared in the context, which are classified
  as introduction, elimination or destruction and as ‹safe› or
  ‹unsafe›.  In general, safe rules can be attempted blindly,
  while unsafe rules must be used with care.  A safe rule must never
  reduce a provable goal to an unprovable set of subgoals.

  The rule P ⟹ P ∨ Q› is unsafe because it reduces P
  ∨ Q› to P›, which might turn out as premature choice of an
  unprovable subgoal.  Any rule whose premises contain new unknowns is
  unsafe. The elimination rule ∀x. P x ⟹ (P t ⟹ Q) ⟹ Q› is
  unsafe, since it is applied via elim-resolution, which discards the
  assumption ∀x. P x› and replaces it by the weaker
  assumption P t›.  The rule P t ⟹ ∃x. P x› is
  unsafe for similar reasons.  The quantifier duplication rule ∀x. P x ⟹ (P t ⟹ ∀x. P x ⟹ Q) ⟹ Q› is unsafe in a different sense:
  since it keeps the assumption ∀x. P x›, it is prone to
  looping.  In classical first-order logic, all rules are safe except
  those mentioned above.

  The safe~/ unsafe distinction is vague, and may be regarded merely
  as a way of giving some rules priority over others.  One could argue
  that (∨E)› is unsafe, because repeated application of it
  could generate exponentially many subgoals.  Induction rules are
  unsafe because inductive proofs are difficult to set up
  automatically.  Any inference that instantiates an unknown
  in the proof state is unsafe --- thus matching must be used, rather than
  unification.  Even proof by assumption is unsafe if it instantiates
  unknowns shared with other subgoals.

  \begin{matharray}{rcl}
    @{command_def "print_claset"}* & : & context →› \\
    @{attribute_def intro} & : & attribute› \\
    @{attribute_def elim} & : & attribute› \\
    @{attribute_def dest} & : & attribute› \\
    @{attribute_def rule} & : & attribute› \\
    @{attribute_def iff} & : & attribute› \\
    @{attribute_def swapped} & : & attribute› \\
  \end{matharray}

  rail‹
    (@@{attribute intro} | @@{attribute elim} | @@{attribute dest}) ('!' | () | '?') @{syntax nat}?
    ;
    @@{attribute rule} 'del'
    ;
    @@{attribute iff} (((() | 'add') '?'?) | 'del')
  ›

   @{command "print_claset"} prints the collection of rules
  declared to the Classical Reasoner, i.e.\ the ML_typeclaset
  within the context.

   @{attribute intro}, @{attribute elim}, and @{attribute dest}
  declare introduction, elimination, and destruction rules,
  respectively.  By default, rules are considered as ‹unsafe›
  (i.e.\ not applied blindly without backtracking), while ``!›'' classifies as ‹safe›.  Rule declarations marked by
  ``?›'' coincide with those of Isabelle/Pure, cf.\
  \secref{sec:pure-meth-att} (i.e.\ are only applied in single steps
  of the @{method rule} method).  The optional natural number
  specifies an explicit weight argument, which is ignored by the
  automated reasoning tools, but determines the search order of single
  rule steps.

  Introduction rules are those that can be applied using ordinary
  resolution.  Their swapped forms, which will be applied using
  elim-resolution, are generated internally.  Elimination rules are
  applied using elim-resolution.  Rules are sorted by the number of
  new subgoals they will yield; rules that generate the fewest
  subgoals will be tried first.  Otherwise, later declarations take
  precedence over earlier ones.

  Rules already present in the context with the same classification
  are ignored.  A warning is printed if the rule has already been
  added with some other classification, but the rule is added anyway
  as requested.

   @{attribute rule}~del› deletes all occurrences of a
  rule from the classical context, regardless of its classification as
  introduction~/ elimination~/ destruction and safe~/ unsafe.

   @{attribute iff} declares logical equivalences to the
  Simplifier and the Classical reasoner at the same time.
  Non-conditional rules result in a safe introduction and elimination
  pair; conditional ones are considered unsafe.  Rules with negative
  conclusion are automatically inverted (using ¬›-elimination
  internally).

  The ``?›'' version of @{attribute iff} declares rules to
  the Isabelle/Pure context only, and omits the Simplifier
  declaration.

   @{attribute swapped} turns an introduction rule into an
  elimination rule, by resolving with the classical swap principle ¬ P ⟹ (¬ R ⟹ P) ⟹ R› in the second position.  This is mainly for
  illustrative purposes: the Classical Reasoner already swaps rules
  internally as explained above.
›
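
text ‹
  The following is a minimal sketch of such declarations, using a
  hypothetical predicate good› introduced only for illustration.  Its
  introduction and elimination rules are declared as safe, so automated
  tools like @{method auto} may apply them blindly:
›

(*<*)experiment begin(*>*)
definition good :: "nat ⇒ bool"
  where "good x ⟷ x = 0"

lemma goodI [intro!]: "x = 0 ⟹ good x"
  by (simp add: good_def)

lemma goodE [elim!]: "good x ⟹ (x = 0 ⟹ Q) ⟹ Q"
  by (auto simp: good_def)

lemma "good (0 + 0)"
  by auto

lemma "good x ⟹ good (x + x)"
  by auto
(*<*)end(*>*)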


subsection ‹Structured methods›

text ‹
  \begin{matharray}{rcl}
    @{method_def rule} & : & method› \\
    @{method_def contradiction} & : & method› \\
  \end{matharray}

  rail@@{method rule} @{syntax thms}?

   @{method rule} as offered by the Classical Reasoner is a
  refinement over the Pure one (see \secref{sec:pure-meth-att}).  Both
  versions work the same, but the classical version observes the
  classical rule context in addition to that of Isabelle/Pure.

  Common object-logics (HOL, ZF, etc.) declare a rich collection of
  classical rules (even if these would qualify as intuitionistic
  ones), but make only a few declarations in the rule context of
  Isabelle/Pure (\secref{sec:pure-meth-att}).

   @{method contradiction} solves some goal by contradiction,
  deriving any result from both ¬ A› and A›.  Chained
  facts, which are guaranteed to participate, may appear in either
  order.
›
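
text ‹
  A minimal sketch of @{method contradiction} within a structured proof;
  the chained facts may be given in either order:
›

(*<*)experiment begin(*>*)
notepad
begin
  assume a: "A" and na: "¬ A"
  from a na have "B" by contradiction
  from na a have "B" by contradiction
end
(*<*)end(*>*)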


subsection ‹Fully automated methods›

text ‹
  \begin{matharray}{rcl}
    @{method_def blast} & : & method› \\
    @{method_def auto} & : & method› \\
    @{method_def force} & : & method› \\
    @{method_def fast} & : & method› \\
    @{method_def slow} & : & method› \\
    @{method_def best} & : & method› \\
    @{method_def fastforce} & : & method› \\
    @{method_def slowsimp} & : & method› \\
    @{method_def bestsimp} & : & method› \\
    @{method_def deepen} & : & method› \\
  \end{matharray}

  rail@@{method blast} @{syntax nat}? (@{syntax clamod} * )
    ;
    @@{method auto} (@{syntax nat} @{syntax nat})? (@{syntax clasimpmod} * )
    ;
    @@{method force} (@{syntax clasimpmod} * )
    ;
    (@@{method fast} | @@{method slow} | @@{method best}) (@{syntax clamod} * )
    ;
    (@@{method fastforce} | @@{method slowsimp} | @@{method bestsimp})
      (@{syntax clasimpmod} * )
    ;
    @@{method deepen} (@{syntax nat} ?) (@{syntax clamod} * )
    ;
    @{syntax_def clamod}:
      (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del') ':' @{syntax thms}
    ;
    @{syntax_def clasimpmod}: ('simp' (() | 'add' | 'del' | 'only') |
      'cong' (() | 'add' | 'del') |
      'split' (() | '!' | 'del') |
      'iff' (((() | 'add') '?'?) | 'del') |
      (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del')) ':' @{syntax thms}

   @{method blast} is a separate classical tableau prover that
  uses the same classical rule declarations as explained before.

  Proof search is coded directly in ML using special data structures.
  A successful proof is then reconstructed using regular Isabelle
  inferences.  It is faster and more powerful than the other classical
  reasoning tools, but has major limitations too.

     It does not use the classical wrapper tacticals, such as the
    Simplifier integration of @{method fastforce}.

     It does not perform higher-order unification, as needed by the
    rule @{thm [source=false] rangeI} in HOL.  There are often
    alternatives to such rules, for example @{thm [source=false]
    range_eqI}.

     Function variables may only be applied to parameters of the
    subgoal.  (This restriction arises because the prover does not use
    higher-order unification.)  If other function variables are present
    then the prover will fail with the message
    @{verbatim [display] ‹Function unknown's argument not a bound variable›}

     Its proof strategy is more general than @{method fast} but can
    be slower.  If @{method blast} fails or seems to be running forever,
    try @{method fast} and the other proof tools described below.

  The optional integer argument specifies a bound for the number of
  unsafe steps used in a proof.  By default, @{method blast} starts
  with a bound of 0 and increases it successively to 20.  In contrast,
  (blast lim)› tries to prove the goal using a search bound
  of lim›.  Sometimes a slow proof using @{method blast} can
  be made much faster by supplying the successful search bound to this
  proof method instead.

   @{method auto} combines classical reasoning with
  simplification.  It is intended for situations where there are a lot
  of mostly trivial subgoals; it proves all the easy ones, leaving the
  ones it cannot prove.  Occasionally, attempting to prove the hard
  ones may take a long time.

  The optional depth arguments in (auto m n)› refer to its
  built-in classical reasoning procedures: m› (default 4) is for
  @{method blast}, which is tried first, and n› (default 2) is
  for a slower but more general alternative that also takes wrappers
  into account.

   @{method force} is intended to prove the first subgoal
  completely, using many fancy proof tools and performing a rather
  exhaustive search.  As a result, proof attempts may take rather long
  or diverge easily.

   @{method fast}, @{method best}, @{method slow} attempt to
  prove the first subgoal using sequent-style reasoning as explained
  before.  Unlike @{method blast}, they construct proofs directly in
  Isabelle.

  There is a difference in search strategy and back-tracking: @{method
  fast} uses depth-first search and @{method best} uses best-first
  search (guided by a heuristic function: normally the total size of
  the proof state).

  Method @{method slow} is like @{method fast}, but conducts a broader
  search: it may, when backtracking from a failed proof attempt, undo
  even the step of proving a subgoal by assumption.

   @{method fastforce}, @{method slowsimp}, @{method bestsimp}
  are like @{method fast}, @{method slow}, @{method best},
  respectively, but use the Simplifier as an additional wrapper.  The
  name @{method fastforce} reflects the behaviour of this popular
  method better without requiring an understanding of its implementation.

   @{method deepen} works by exhaustive search up to a certain
  depth.  The start depth is 4 (unless specified explicitly), and the
  depth is increased iteratively up to 10.  Unsafe rules are modified
  to preserve the formula they act on, so that it can be used repeatedly.
  This method can prove more goals than @{method fast}, but is much
  slower, for example if the assumptions have many universal
  quantifiers.


  All of the above methods support additional modifiers of the context
  of classical (and simplifier) rules; the ones related to the
  Simplifier are explicitly prefixed by simp› here.  The
  semantics of these ad-hoc rule declarations is analogous to the
  attributes given before.  Facts provided by forward chaining are
  inserted into the goal before commencing proof search.
›
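
text ‹
  A small sketch (assuming Isabelle/HOL): the first statement is typical
  @{method blast} territory, the second combines classical reasoning with
  simplification via @{method auto}, and the third supplies an ad-hoc
  intro› modifier (the library fact le0›) as in the clamod› syntax above:
›

(*<*)experiment begin(*>*)
notepad
begin
  have "(∀x. P x ⟶ Q x) ⟹ (∃x. P x) ⟹ (∃x. Q x)"
    by blast
  have "rev (rev xs) = xs ∧ (∀x. x ∈ set xs ⟶ x ∈ set (xs @ ys))"
    by auto
  have "∃x::nat. ∀y. x ≤ y"
    by (blast intro: le0)
end
(*<*)end(*>*)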


subsection ‹Partially automated methods\label{sec:classical:partial}›

text ‹These proof methods may help in situations when the
  fully-automated tools fail.  The result is a simpler subgoal that
  can be tackled by other means, such as by manual instantiation of
  quantifiers.

  \begin{matharray}{rcl}
    @{method_def safe} & : & method› \\
    @{method_def clarify} & : & method› \\
    @{method_def clarsimp} & : & method› \\
  \end{matharray}

  rail‹
    (@@{method safe} | @@{method clarify}) (@{syntax clamod} * )
    ;
    @@{method clarsimp} (@{syntax clasimpmod} * )
  ›

   @{method safe} repeatedly performs safe steps on all subgoals.
  It is deterministic, with at most one outcome.

   @{method clarify} performs a series of safe steps without
  splitting subgoals; see also @{method clarify_step}.

   @{method clarsimp} acts like @{method clarify}, but also does
  simplification.  Note that if the Simplifier context includes a
  splitter for the premises, the subgoal may still be split.
›
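
text ‹
  A minimal sketch (assuming Isabelle/HOL) of @{method clarify} preparing a
  goal for manual quantifier reasoning in a structured proof:
›

(*<*)experiment begin(*>*)
notepad
begin
  have "(∃x. ∀y. P x y) ⟶ (∀y. ∃x. P x y)"
  proof clarify
    fix x y
    assume "∀y. P x y"
    then have "P x y" by (rule spec)
    then show "∃x. P x y" by (rule exI)
  qed
end
(*<*)end(*>*)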


subsection ‹Single-step tactics›

text ‹
  \begin{matharray}{rcl}
    @{method_def safe_step} & : & method› \\
    @{method_def inst_step} & : & method› \\
    @{method_def step} & : & method› \\
    @{method_def slow_step} & : & method› \\
    @{method_def clarify_step} & : &  method› \\
  \end{matharray}

  These are the primitive tactics behind the automated proof methods
  of the Classical Reasoner.  By calling them yourself, you can
  execute these procedures one step at a time.

   @{method safe_step} performs a safe step on the first subgoal.
  The safe wrapper tacticals are applied to a tactic that may include
  proof by assumption or Modus Ponens (taking care not to instantiate
  unknowns), or substitution.

   @{method inst_step} is like @{method safe_step}, but allows
  unknowns to be instantiated.

   @{method step} is the basic step of the proof procedure; it
  operates on the first subgoal.  The unsafe wrapper tacticals are
  applied to a tactic that tries @{method safe}, @{method inst_step},
  or applies an unsafe rule from the context.

   @{method slow_step} resembles @{method step}, but allows
  backtracking between using safe rules with instantiation (@{method
  inst_step}) and using unsafe rules.  The resulting search space is
  larger.

   @{method clarify_step} performs a safe step on the first
  subgoal; no splitting step is applied.  For example, the subgoal
  A ∧ B› is left as a conjunction.  Proof by assumption,
  Modus Ponens, etc., may be performed provided they do not
  instantiate unknowns.  Assumptions of the form x = t› may
  be eliminated.  The safe wrapper tactical is applied.
›
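
text ‹
  As a minimal sketch, repeated application of @{method safe_step}
  decomposes a propositional goal completely, closing the resulting
  subgoals by assumption along the way:
›

(*<*)experiment begin(*>*)
notepad
begin
  have "A ∧ B ⟶ B ∧ A"
    by (safe_step+)
end
(*<*)end(*>*)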


subsection ‹Modifying the search step›

text ‹
  \begin{mldecls}
    @{define_ML_type wrapper = "(int -> tactic) -> (int -> tactic)"} \\[0.5ex]
    @{define_ML_infix addSWrapper: "Proof.context *
  (string * (Proof.context -> wrapper)) -> Proof.context"} \\
    @{define_ML_infix addSbefore: "Proof.context *
  (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\
    @{define_ML_infix addSafter: "Proof.context *
  (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\
    @{define_ML_infix delSWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex]
    @{define_ML_infix addWrapper: "Proof.context *
  (string * (Proof.context -> wrapper)) -> Proof.context"} \\
    @{define_ML_infix addbefore: "Proof.context *
  (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\
    @{define_ML_infix addafter: "Proof.context *
  (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\
    @{define_ML_infix delWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex]
    @{define_ML addSss: "Proof.context -> Proof.context"} \\
    @{define_ML addss: "Proof.context -> Proof.context"} \\
  \end{mldecls}

  The proof strategy of the Classical Reasoner is simple.  Perform as
  many safe inferences as possible; or else, apply certain safe rules,
  allowing instantiation of unknowns; or else, apply an unsafe rule.
  The tactics also eliminate assumptions of the form x = t›
  by substitution if they have been set up to do so.  They may perform
  a form of Modus Ponens: if there are assumptions P ⟶ Q› and
  P›, then replace P ⟶ Q› by Q›.

  The classical reasoning tools --- except @{method blast} --- allow
  this basic proof strategy to be modified by applying two lists of
  arbitrary ‹wrapper tacticals› to it.  The first wrapper list,
  which is considered to contain safe wrappers only, affects @{method
  safe_step} and all the tactics that call it.  The second one, which
  may contain unsafe wrappers, affects the unsafe parts of @{method
  step}, @{method slow_step}, and the tactics that call them.  A
  wrapper transforms each step of the search, for example by
  attempting other tactics before or after the original step tactic.
  All members of a wrapper list are applied in turn to the respective
  step tactic.

  Initially the two wrapper lists are empty, which means no
  modification of the step tactics. Safe and unsafe wrappers are added
  to the context with the functions given below, supplying them with
  wrapper names.  These names may be used to selectively delete
  wrappers.

   ctxt addSWrapper (name, wrapper)› adds a new wrapper,
  which should yield a safe tactic, to modify the existing safe step
  tactic.

   ctxt addSbefore (name, tac)› adds the given tactic as a
  safe wrapper, such that it is tried ‹before› each safe step of
  the search.

   ctxt addSafter (name, tac)› adds the given tactic as a
  safe wrapper, such that it is tried when a safe step of the search
  would fail.

   ctxt delSWrapper name› deletes the safe wrapper with
  the given name.

   ctxt addWrapper (name, wrapper)› adds a new wrapper to
  modify the existing (unsafe) step tactic.

   ctxt addbefore (name, tac)› adds the given tactic as an
  unsafe wrapper, such that its result is concatenated
  ‹before› the result of each unsafe step.

   ctxt addafter (name, tac)› adds the given tactic as an
  unsafe wrapper, such that its result is concatenated ‹after›
  the result of each unsafe step.

   ctxt delWrapper name› deletes the unsafe wrapper with
  the given name.

   addSss› adds the simpset of the context to its
  classical set. The assumptions and goal will be simplified, in a
  rather safe way, after each safe step of the search.

   addss› adds the simpset of the context to its
  classical set. The assumptions and goal will be simplified before
  each unsafe step of the search.
›
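
text ‹
  The following ML sketch builds a context with an additional safe wrapper
  that merely attempts proof by assumption before each safe step, and a
  context with that wrapper deleted again; the wrapper name
  my_assm_first› and the choice of tactic are purely illustrative:
›

ML ‹
  (*purely illustrative: attempt proof by assumption before each safe step*)
  fun add_assm_first ctxt =
    ctxt addSbefore ("my_assm_first", fn ctxt' => assume_tac ctxt');

  (*the wrapper can be deleted again via its name*)
  fun del_assm_first ctxt = ctxt delSWrapper "my_assm_first";
›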


section ‹Object-logic setup \label{sec:object-logic}›

text ‹
  \begin{matharray}{rcl}
    @{command_def "judgment"} & : & theory → theory› \\
    @{method_def atomize} & : & method› \\
    @{attribute_def atomize} & : & attribute› \\
    @{attribute_def rule_format} & : & attribute› \\
    @{attribute_def rulify} & : & attribute› \\
  \end{matharray}

  The very starting point for any Isabelle object-logic is a ``truth
  judgment'' that links object-level statements to the meta-logic
  (with its minimal language of prop› that covers universal
  quantification ⋀› and implication ⟹›).

  Common object-logics are sufficiently expressive to internalize rule
  statements over ⋀› and ⟹› within their own
  language.  This is useful in certain situations where a rule needs
  to be viewed as an atomic statement from the meta-level perspective,
  e.g.\ ⋀x. x ∈ A ⟹ P x› versus ∀x ∈ A. P x›.

  Of the following language elements, only the @{method atomize}
  method and the @{attribute rule_format} attribute are occasionally
  required by end-users; the rest is for those who need to set up
  their own object-logic.  In the latter case, existing formulations of
  Isabelle/FOL or Isabelle/HOL may be taken as realistic examples.

  Generic tools may refer to the information provided by object-logic
  declarations internally.

  rail@@{command judgment} @{syntax name} '::' @{syntax type} @{syntax mixfix}?
    ;
    @@{attribute atomize} ('(' 'full' ')')?
    ;
    @@{attribute rule_format} ('(' 'no_asm' ')')?

   @{command "judgment"}~c :: σ (mx)› declares constant
  c› as the truth judgment of the current object-logic.  Its
  type σ› should specify a coercion of the category of
  object-level propositions to prop› of the Pure meta-logic;
  the mixfix annotation (mx)› would typically just link the
  object language (internally of syntactic category logic›)
  with that of prop›.  Only one @{command "judgment"}
  declaration may be given in any theory development.
  
   @{method atomize} (as a method) rewrites any non-atomic
  premises of a sub-goal, using the meta-level equations declared via
  @{attribute atomize} (as an attribute) beforehand.  As a result,
  heavily nested goals become amenable to fundamental operations such
  as resolution (cf.\ the @{method (Pure) rule} method).  Giving the ``(full)›'' option here means to turn the whole subgoal into an
  object-statement (if possible), including the outermost parameters
  and assumptions as well.

  A typical collection of @{attribute atomize} rules for a particular
  object-logic would provide an internalization for each of the
  connectives of ⋀›, ⟹›, and ≡›.
  Meta-level conjunction should be covered as well (this is
  particularly important for locales, see \secref{sec:locale}).

   @{attribute rule_format} rewrites a theorem by the equalities
  declared as @{attribute rulify} rules in the current object-logic.
  By default, the result is fully normalized, including assumptions
  and conclusions at any depth.  The (no_asm)› option
  restricts the transformation to the conclusion of a rule.

  In common object-logics (HOL, FOL, ZF), the effect of @{attribute
  rule_format} is to replace (bounded) universal quantification
  (∀›) and implication (⟶›) by the corresponding
  rule statements over ⋀› and ⟹›.
›
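
text ‹
  A minimal sketch (assuming Isabelle/HOL) of @{attribute rule_format}: the
  statement below uses object-level ∀› and ⟶›, but the resulting fact
  example_rule› (a name chosen only for illustration) is stored in rule
  form as P ?x ∧ Q ?x ⟹ P ?x›:
›

(*<*)experiment begin(*>*)
lemma example_rule [rule_format]: "∀x. P x ∧ Q x ⟶ P x"
  by blast

thm example_rule
(*<*)end(*>*)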


section ‹Tracing higher-order unification›

text ‹
  \begin{tabular}{rcll}
    @{attribute_def unify_trace} & : & attribute› & default false› \\
    @{attribute_def unify_trace_simp} & : & attribute› & default false› \\
    @{attribute_def unify_trace_types} & : & attribute› & default false› \\
    @{attribute_def unify_trace_bound} & : & attribute› & default 50› \\
    @{attribute_def unify_search_bound} & : & attribute› & default 60› \\
  \end{tabular}
  

  Higher-order unification works well in most practical situations,
  but sometimes needs extra care to identify problems.  These tracing
  options may help.

   @{attribute unify_trace} controls whether unification trace messages
  are printed at all; the options below provide more fine-grained control.

   @{attribute unify_trace_simp} controls tracing of the
  simplification phase of higher-order unification.

   @{attribute unify_trace_types} controls tracing of potential
  incompleteness, when unification is not considering all possible
  instantiations of schematic type variables.

   @{attribute unify_trace_bound} determines the search depth at which
  unification starts to print tracing information; a value of 0 means
  full tracing.  At the default value, tracing information is almost
  never printed in practice.

   @{attribute unify_search_bound} prevents unification from
  searching past the given depth.  Because of this bound, higher-order
  unification cannot return an infinite sequence, though it can return
  an exponentially long one.  The search rarely approaches the default
  value in practice.  If the search is cut off, unification prints a
  warning ``Unification bound exceeded''.


  \begin{warn}
  Options for unification cannot be modified in a local context.  Only
  the global theory content is taken into account.
  \end{warn}
›

end