The authors present a related family of authentication and digital signature protocols based on symmetric cryptographic primitives, which perform substantially better than previous constructions. They show that, given online access to a timestamping service, messages can be signed using only two computations of a hash function; it is also shown that in many realistic scenarios a small number of hash function computations suffices. The Diffie-Hellman protocol enabled two principals to create a confidentiality key from scratch: the authors provide an equivalent protocol for integrity, which enables two people who do not share a secret to set up a securely serialised channel into which attackers cannot subsequently intrude. The protocol construction also raises interesting questions about the definition of a digital signature, and about the relationship between integrity and authenticity.
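The flavour of such a scheme can be conveyed with a minimal commit-and-reveal sketch (a simplification, not the authors' exact protocol: the timestamping service is abstracted as an append-only ordered log, SHA-256 stands in for the hash primitive, and the function names are hypothetical):

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    """One hash computation (SHA-256 as a stand-in primitive)."""
    return hashlib.sha256(data).digest()

# The timestamping service is modelled as an append-only list:
# position in the list establishes temporal order.
timestamp_log = []

def sign(message: bytes) -> bytes:
    """Commit to the message with a fresh secret: one hash computation."""
    secret = os.urandom(32)
    commitment = H(message + secret)
    timestamp_log.append(commitment)   # timestamped before disclosure
    return secret

def verify(message: bytes, secret: bytes) -> bool:
    """Recompute the commitment (one more hash computation) and check
    that it was logged before the secret was revealed."""
    return H(message + secret) in timestamp_log

secret = sign(b"pay Bob 100 pounds")
assert verify(b"pay Bob 100 pounds", secret)
assert not verify(b"pay Bob 999 pounds", secret)
```

The timestamped ordering, rather than any asymmetric trapdoor, is what prevents an attacker who later learns the secret from backdating a forged commitment.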
The article elaborates on how technological change has eroded the assumptions on which cryptographic protocol design was founded in the 1970s. For example, computers are now personal rather than shared; disk space and memory are cheap; and the requirement for confidentiality is often firewall-to-firewall rather than user-to-user. It follows that many requirements could be met with quite simple and robust solutions, such as the use of one-time pads to protect traffic between corporations that do business with each other. One consequence will be that much of the supposed requirement for trusted third parties will simply fall away.
The authors describe a protocol proposed by GCHQ for protecting British government email. It is based on the escrow proposal and the NSA Message Security Protocol, with some modifications and extensions. They point out a number of weaknesses: it does no more than Kerberos does, but at a much higher cost; it is over-centralised and will scale poorly; signing keys are distributed under escrowed confidentiality keys, so non-repudiation is lost and leaked official documents can be dismissed as forgeries; security labels are sent in the clear, creating a traffic analysis vulnerability; and keys are identity-based, so a user whose key is compromised has to change his name.
The authors present a new proactive password checking technique based on decision trees; the idea is to help users avoid choosing weak passwords, but without the overhead of an enormous dictionary. Using decision trees as password classifiers, they managed to represent dictionaries with a compression factor of 100 and an error of 1%. An accurate comparative evaluation with existing proactive password checkers is also presented. The authors have released a public domain implementation of their checker.
An innovative way of arranging small cash payments (whether physical or electronic) through zero-average-loss betting is outlined: to pay 63c with a $1 bill, one simply gambles with the merchant at appropriate odds. This would reduce exchange and transfer costs, though it would also require changes not only to scheme design, but also to some of the schemes' underlying principles.
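The betting arithmetic can be sketched as follows (a hypothetical simulation, not from the paper): the merchant wins the whole $1 bill with probability 0.63 and nothing otherwise, so the expected transfer is exactly 63c.

```python
import random

def probabilistic_payment(amount_cents: int, bill_cents: int,
                          rng: random.Random) -> int:
    """Pay `amount_cents` using a single bill worth `bill_cents` by
    gambling: the merchant wins the whole bill with probability
    amount/bill, otherwise nothing changes hands.
    Expected transfer per round = amount_cents."""
    if rng.random() < amount_cents / bill_cents:
        return bill_cents   # payer loses the bet: hand over the bill
    return 0                # payer wins the bet: keep the bill

rng = random.Random(42)     # seeded for reproducibility
trials = 100_000
total = sum(probabilistic_payment(63, 100, rng) for _ in range(trials))
average = total / trials    # converges to 63.0 cents as trials grow
```

Over many transactions the payer neither gains nor loses on average, which is what makes the scheme a plausible substitute for small change.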
The authors discuss how key certification services can be made robust against undetected failure. They present a design in which separation of duty ensures that at least two entities out of the three (user, certification authority and a separate revocation authority) have to collaborate to enforce a change in a certificate or in the evidence. The design lays great emphasis on evidence being fully verifiable, with hash chaining to prevent re-ordering and other manipulation of certificates and logs.
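Hash chaining of the kind mentioned can be sketched generically (an illustration of the mechanism, not the paper's exact design): each log entry's hash incorporates the previous entry's hash, so re-ordering or altering any entry changes every subsequent chained value.

```python
import hashlib

def chain_log(entries):
    """Return the chained hashes h_i = H(h_{i-1} || entry_i)."""
    h = b"\x00" * 32                    # agreed initial value
    hashes = []
    for entry in entries:
        h = hashlib.sha256(h + entry).digest()
        hashes.append(h)
    return hashes

log = [b"issue cert A", b"revoke cert A", b"issue cert B"]
original = chain_log(log)

# Re-ordering two entries changes every subsequent chained hash,
# so the final value no longer matches the published one:
tampered = chain_log([log[1], log[0], log[2]])
assert original[-1] != tampered[-1]
```

Publishing (or timestamping) only the final chained hash is enough to commit to the whole history of certificates and log entries.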
The modern era only started once the printing press enabled seditious thoughts to be spread too widely to ban; yet the move to electronic publication means that many works are available on only a few servers, whose owners can be sued or coerced. In order to create a true electronic equivalent of book publishing, the author suggests using suitable protocols to build a distributed file store that would be highly resistant to denial-of-service attacks. A design is sketched combining mechanisms such as fragmentation, redundancy, scattering and anonymity. This raises a number of research questions such as secure time, digital annuities, and the relationship between anonymity and resilience.
The authors show that ElGamal signatures modulo a prime p can be decomposed into separate signatures in the subgroups of Z_p^*. The signing key may be recoverable using discrete log computation techniques, or deliberately shared with some third party, modulo some of the distinct prime power factors of p-1 but not others. This gives rise to broadcast and narrowcast subliminal channels respectively. The construction settles in the negative a conjecture of Simmons that all broadband subliminal channels involve compromising the signing key. It also shows that the US Digital Signature Standard is designed to minimise the subliminal channel capacity, rather than to maximise it as had previously been thought.
The authors present eleven principles to help crypto protocol designers avoid the most common serious errors. The principles help ensure that messages mean what they say, and that the conditions under which they are to be acted on are clear. Many of them have to do with aspects of typing, especially when encryption is used, while others have to do with ensuring freshness. A new attack, incorporated since the conference version of the paper, breaks an early version of SSL. This paper originally appeared as DEC SRC Research Report 125.
The authors present a number of attacks on public key protocols. For example, where messages are first signed and then encrypted, it is often possible to find another plaintext that encrypts to the signed ciphertext under a different public key; this yields attacks on X.509 and the draft ISO 11770. There is also a simplification of Burmester's attack on authenticated key exchange, and a discussion of the lack of redundancy in ISO 11166 key certificates and its effect on prudent implementation. Eight principles are presented that would have prevented these attacks, together with other existing ones; these principles may be used both to avoid design errors and to find attacks.
The author describes an EC project to pilot authentication and security services; PEM, X.400 and X.500 were implemented separately by UK, French and German researchers, and this helped to find and fix ambiguities in the standards documents. The project concluded that the PEM certification hierarchy is unworkable, as it assumes that a single organisation can be trusted to control the entire world's key distribution system.
The authors provide an overview of cryptographic protocols for a general scientific audience. They describe some practical cases in which protocol failure has led to real attacks on banking, electricity metering and pay-per-view systems; they present new attacks, on the wide-mouthed-frog protocol and on a protocol of Woo and Lam; and they discuss the relative merits of protocol logics and robustness principles. Designing crypto protocols is called programming Satan's computer because it amounts to writing a program to run on a machine that will give the most subtle wrong answer at the most inconvenient moment. The guiding principle of this black art is to make as many things explicit as possible.
The author considers the engineering aspects of using software encryption in a LAN. As most network traffic is bursty, a considerable speed improvement can be achieved by using idle cycles to precompute keystream using DES in OFB mode. With Ethernet, performance also depends on packet size to the extent that tuning this can often make up the residual cost of software encryption. Thus, even with DES, well-designed encryption software can often be almost free of performance costs.
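The precomputation idea can be sketched as follows (a hypothetical illustration: a keyed SHA-256 feedback loop stands in for DES in OFB mode, since the point here is the idle-time buffering rather than the cipher itself):

```python
import hashlib

class PrecomputedKeystream:
    """OFB-style keystream generator: each output block is fed back as
    the next input, so keystream depends only on the key and IV and can
    be produced before any plaintext arrives."""

    def __init__(self, key: bytes, iv: bytes):
        self.key = key
        self.block = iv
        self.buffer = bytearray()

    def precompute(self, n_blocks: int) -> None:
        """Run during idle cycles: extend the keystream buffer."""
        for _ in range(n_blocks):
            self.block = hashlib.sha256(self.key + self.block).digest()
            self.buffer.extend(self.block)

    def encrypt(self, data: bytes) -> bytes:
        """On a traffic burst: just XOR against precomputed keystream."""
        if len(self.buffer) < len(data):
            self.precompute(1 + len(data) // 32)   # buffer ran dry
        ks = self.buffer[:len(data)]
        del self.buffer[:len(data)]
        return bytes(p ^ k for p, k in zip(data, ks))

sender = PrecomputedKeystream(b"k" * 16, b"i" * 32)
sender.precompute(8)                    # idle-time work, off the critical path
receiver = PrecomputedKeystream(b"k" * 16, b"i" * 32)
ct = sender.encrypt(b"bursty packet payload")
assert receiver.encrypt(ct) == b"bursty packet payload"   # XOR is symmetric
```

When traffic arrives in bursts, the per-packet cost on the critical path reduces to an XOR, which is why software encryption can come almost for free.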
This talk considers the problem of how an organisation can recover from a security disaster (such as a security officer defecting, an algorithm being broken, or a key distribution centre being subverted). It turns out that different protocols have different recovery properties; public key systems are generally more resilient, whereas systems based on symmetric algorithms and a key server (such as Kerberos) are vulnerable to theft or judicial compulsion of the server. Online servers, on the other hand, allow user privileges to be changed more quickly in the event of a defection. The SDNS system, whose servers provide daily changes of public keys for users, gives the best of both worlds.
The author proposes making Piper's trapdoor more practical by choosing the RSA modulus primes from an arithmetic sequence. Such a booby-trapped modulus can be factored more easily by anyone who knows the parameters of the sequence.
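The asymmetry can be illustrated with a toy factorisation (a hypothetical illustration with artificially small numbers, not the paper's construction; the sequence parameters a and d are assumed known only to the trapdoor holder):

```python
def factor_with_trapdoor(n: int, a: int, d: int, max_k: int = 10_000):
    """Trial-divide n only over members of the arithmetic sequence
    a, a+d, a+2d, ... — far fewer candidates than blind trial division,
    assuming both prime factors were drawn from the sequence."""
    for k in range(max_k):
        candidate = a + k * d
        if candidate > 1 and n % candidate == 0:
            return candidate, n // candidate
    return None

# Both primes drawn from the sequence 7 + 30k: 127 = 7 + 4*30, 157 = 7 + 5*30.
p, q = factor_with_trapdoor(127 * 157, a=7, d=30)
assert {p, q} == {127, 157}
```

An outsider who does not know a and d faces ordinary factorisation, while the trapdoor holder only searches the sparse sequence.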
Server-aided protocols enable a device such as a smartcard to speed up its computations using insecure auxiliary devices. A protocol of Matsumoto, Kato and Imai is shown to be insecure: instead of returning partial signatures on the desired message, the server can send back a set of values designed to disclose the card's secret key. The implication is that the smartcard must check the signature before releasing it, which makes such protocols less practical.