Informal arguments that cryptographic protocols are secure can be made rigorous using inductive definitions. The approach is based on ordinary predicate calculus and copes with infinite-state systems. Proofs are generated using Isabelle/HOL. Protocols are inductively defined as sets of traces. A trace is a list of communication events, perhaps comprising many interleaved protocol runs. Protocol descriptions incorporate attacks and accidental losses. The model spy knows some private keys and can forge messages using components decrypted from previous traffic. Three protocols are analyzed below: Otway-Rees (which uses shared-key encryption), Needham-Schroeder (which uses public-key encryption), and a recursive protocol (which is of variable length).
Internet browsers use security protocols to protect confidential messages. An inductive analysis of TLS (a descendant of SSL 3.0) has been performed using the theorem prover Isabelle. Proofs are based on higher-order logic and make no assumptions concerning beliefs or finiteness. All the obvious security goals can be proved; session resumption appears to be secure even if old session keys have been compromised. The analysis suggests modest changes to simplify the protocol. TLS, even at an abstract level, is much more complicated than most protocols that researchers have verified. Session keys are negotiated rather than distributed, and the protocol has many optional parts. Nevertheless, the resources needed to verify TLS are modest. The inductive approach scales up.
A new protocol based on Otway-Rees is presented; it can deliver session keys to an unlimited number of neighbouring agents through message nesting. An analysis of this protocol is given in detail, using the approach presented in the paper below.
The use of induction for analysing and proving properties of cryptographic protocols can help derive guarantees from protocol messages and explicit assumptions. The paper presents a theory of message analysis built on three operators, which are used to automate protocol proofs in the higher-order logic theorem prover Isabelle/HOL. Proof commands are discussed and the Otway-Rees protocol is analysed.
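The three operators are Paulson's parts, analz and synth. As a rough illustration of the middle one, here is a minimal sketch of analz as a fixpoint computation over symbolic messages; the tuple encoding and the loop are this sketch's assumptions, not the paper's Isabelle definitions.

```python
# Sketch of the analz operator from the inductive method (symbolic
# messages only; encoding is illustrative, not the paper's).
# A message is an atom (nonce or key), a pair ('pair', x, y), or an
# encryption ('crypt', key, body).

def analz(H):
    """Close the set H under splitting pairs and decrypting any
    ('crypt', k, body) whose key k has itself been extracted."""
    known = set(H)
    changed = True
    while changed:
        changed = False
        for m in list(known):
            if isinstance(m, tuple):
                if m[0] == 'pair':
                    new = {m[1], m[2]} - known
                elif m[0] == 'crypt' and m[1] in known:
                    new = {m[2]} - known
                else:
                    new = set()
                if new:
                    known |= new
                    changed = True
    return known

# A spy who has seen {Na, Kab}_K and also knows K can learn Kab;
# without K the session key stays hidden.
H = {('crypt', 'K', ('pair', 'Na', 'Kab')), 'K'}
print('Kab' in analz(H))                                        # True
print('Kab' in analz({('crypt', 'K', ('pair', 'Na', 'Kab'))}))  # False
```

The other two operators work analogously: parts also opens encryptions whose keys are unknown (giving an upper bound on message components), while synth closes a set under pairing and encryption, modelling what the spy can forge.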
The Yahalom protocol is one of those analyzed by Burrows et al. in the BAN paper. Based upon their analysis, they have proposed modifications to make the protocol easier to understand and analyze. Both versions of Yahalom have now been proved, using Isabelle/HOL, to satisfy strong security goals. The mathematical reasoning behind these machine proofs is presented informally. The new proofs do not rely on a belief logic; they use an entirely different formal model, the inductive method. They confirm the BAN analysis and the advantages of the proposed modifications. The new proof methods detect more flaws than BAN and analyze protocols in finer detail, while remaining broadly consistent with the BAN principles. In particular, the proofs confirm the explicitness principle of Abadi and Needham.
The authors present a general theory of possibilistic security properties. They show that a security property can be expressed as a predicate that is true of every set containing all the traces with the same low-level event sequence. Given this security predicate, they show how to construct a partial ordering of security properties. They also discuss information flow and present the weakest property ensuring that no information can flow from high-level users to low-level users. Finally, they compare their framework with McLean's Selective Interleaving Functions framework.
They introduced the spi calculus, an extension of the pi calculus designed for the description and analysis of cryptographic protocols. They showed how to use the spi calculus, particularly for studying authentication protocols. The pi calculus (without extension) suffices for some abstract protocols; the spi calculus makes it possible to consider cryptographic issues in more detail. They represented protocols as processes in the spi calculus and stated their security properties in terms of coarse-grained notions of protocol equivalence.
R. M. Needham defines a pure name to be "nothing but a bit pattern that is an identifier, and is only useful for comparing for identity with other bit patterns - which includes looking up in tables in order to find other information". The authors argue that pure names are relevant to both security and mobility. Nominal calculi are computational formalisms that include pure names and the dynamic generation of fresh, unguessable names. They survey recent work on nominal calculi with primitives representing location failure, process migration and cryptography, and suggest areas for further work.
The inductive approach to verifying security protocols, previously applied to shared-key encryption, is here applied to the public key version of the Needham-Schroeder protocol. As before, mechanized proofs are performed using Isabelle/HOL. Both the original, flawed version and Lowe's improved version are studied; the properties proved highlight the distinctions between the two versions. The results are compared with previous analyses of the same protocol. The analysis reported below required only 30 hours of the author's time. The proof scripts execute in under three minutes.
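The flaw in the original protocol is Lowe's man-in-the-middle attack; his fix adds the responder's name to the second message. A toy sketch of both versions, assuming the standard three-message exchange (symbolic "encryption" as tagged tuples; agent names, nonces and keys are illustrative, not from the paper):

```python
# Toy model of the Needham-Schroeder public-key protocol and Lowe's
# attack. Encryption is symbolic: ('enc', key, payload) can be opened
# only by the holder of that key. All names are illustrative.

def enc(key, payload):
    return ('enc', key, payload)

def dec(key, msg):
    tag, k, payload = msg
    assert tag == 'enc' and k == key, "wrong key"
    return payload

def run(fixed):
    """Replay Lowe's man-in-the-middle. A starts a session with the
    intruder I, who relays to B impersonating A. Returns True if the
    attack succeeds (I completes the run with B posing as A)."""
    kA, kB, kI = 'kA', 'kB', 'kI'
    Na, Nb = 'Na', 'Nb'
    # 1. A -> I : {Na, A}_kI   (A genuinely wants to talk to I)
    m1 = enc(kI, (Na, 'A'))
    # I decrypts and re-encrypts for B, posing as A.
    enc(kB, dec(kI, m1))
    # 2. B -> A : {Na, Nb}_kA, or {Na, Nb, B}_kA in Lowe's version.
    body = (Na, Nb, 'B') if fixed else (Na, Nb)
    m2 = enc(kA, body)
    # A decrypts; in the fixed protocol A checks the responder's name.
    got = dec(kA, m2)
    if fixed and got[2] != 'I':      # A expects to be talking to I
        return False                 # A aborts; the attack fails
    # 3. A -> I : {Nb}_kI ; I learns Nb and completes the run with B.
    m3 = enc(kI, Nb)
    enc(kB, dec(kI, m3))             # I -> B, and B accepts
    return True

print(run(fixed=False))  # True: the attack succeeds on the original
print(run(fixed=True))   # False: Lowe's extra name blocks it
```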
This paper concerns a form of non-interference for event systems, similar to McCullough's generalized non-interference. This property is not generally composable, but this paper shows that it is preserved by compositions in which any feedback is interrupted by a delay.
The authors call two security policies compatible in a given system if the system satisfies them both simultaneously. Policies can be incompatible for environmental as well as technical reasons, and a model is developed with reference to railway signalling. This model is then applied to discuss confidentiality, integrity and faithfulness (that the same inputs will give the same outputs); these are shown to be compatible.
The authors propose a new definition of confidentiality in multilevel systems, which is based on formal modelling by event systems and deterministic regular parsable grammars. The goal is to build a structure in which composability can be dealt with in a coherent and rigorous manner. Like Lin's behavioural security model, it is based on input-output causality and requires that high-level input events never be prerequisites for low-level output events.
This is the original paper on BAN logic.
The authors' abstract is as follows:
"Questions of belief are essential in analyzing protocols for authentication in distributed computing systems. In this paper we motivate, set out, and exemplify a logic specifically designed for this analysis; we show how various protocols differ subtly with respect to the required initial assumptions of the participants and their final beliefs. Our formalism has enabled us to isolate and express these differences with a precision that was not previously possible. It has drawn attention to features of protocols of which we and their authors were previously unaware, and allowed us to suggest improvements to the protocols. The reasoning about some protocols has been mechanically verified.
This paper starts with an informal account of the problem, goes on to explain the formalism to be used, and gives examples of its application to protocols from the literature, both with conventional shared-key cryptography and with public-key cryptography. Some of the examples are chosen because of their practical importance, while others serve to illustrate subtle points of the logic and to explain how we use it. We discuss extensions of the logic motivated by actual practice - for example, in order to account for the use of hash functions in signatures. The final sections contain a formal semantics of the logic and some conclusions."
This report starts with an explanation and analysis of the American TCSEC/TPSEC and the European ITSEC criteria, and compares the levels of trustworthiness achieved: ITSEC is less detailed, but broader in its scope and more flexible. It proceeds to highlight the difference between guarantee (defined as the design quality of components) and assurance (defined as the quality of the system design and implementation process). This distinction is considered in a number of contexts, such as with security standards (where both TCSEC and ITSEC set out to achieve assurance but tend to deliver only guarantee). It goes on to consider what formal modelling resources are available to a designer who seeks a high level of assurance about a system composed of trusted components, and examines the approaches of McCullough, Johnson & Thayer, Guttman & Nadel, Yu & Gligor, Clark & Wilson, Sutherland and McLean. All of these suffer drawbacks of one kind or another, and finally an alternative is produced. This is called the possibilistic flow model, and a theorem is proved to the effect that this model is composable.
The proliferation of passwords has become a problem, and if users have too many passwords, then they will use weak ones or write some of them down. It is proposed instead that there should be only one logon, to the local workstation, which should then in turn log on to remote hosts using machine generated passwords. This can be further strengthened by encryption.
An extension of the BAN logic is presented which differentiates between objects which have been freshly created, and objects which have been freshly used. Freshness can be transferred from the first of these to the second, and if an encrypted message contains a freshly created item, then the key can be considered freshly used. An implementation of this logic is also reported.