Why Cryptosystems Fail

Ross Anderson
University Computer Laboratory
Pembroke Street, Cambridge CB2 3QG
Email: rja14@cl.cam.ac.uk

Abstract:

Designers of cryptographic systems are at a disadvantage compared with most other engineers, in that information on how their systems fail is hard to get: their major users have traditionally been government agencies, which are very secretive about their mistakes.

In this article, we present the results of a survey of the failure modes of retail banking systems, which constitute the next largest application of cryptology. It turns out that the threat model commonly used by cryptosystem designers was wrong: most frauds were not caused by cryptanalysis or other technical attacks, but by implementation errors and management failures. This suggests that a paradigm shift is overdue in computer security; we look at some of the alternatives, and see some signs that this shift may be getting under way.

Introduction

Cryptology, the science of code and cipher systems, is used by governments, banks and other organisations to keep information secure. It is a complex subject, and its national security overtones may invest it with a certain amount of glamour, but we should never forget that information security is at heart an engineering problem. The hardware and software products which are designed to solve it should in principle be judged in the same way as any other products: by their cost and effectiveness.

However, the practice of cryptology differs from, say, that of aeronautical engineering in a rather striking way: there is almost no public feedback about how cryptographic systems fail.

When an aircraft crashes, it is front page news. Teams of investigators rush to the scene, and the subsequent enquiries are conducted by experts from organisations with a wide range of interests - the carrier, the insurer, the manufacturer, the airline pilots' union, and the local aviation authority. Their findings are examined by journalists and politicians, discussed in pilots' messes, and passed on by flying instructors.

In short, the flying community has a strong and institutionalised learning mechanism. This is perhaps the main reason why, despite the inherent hazards of flying in large aircraft, which are maintained and piloted by fallible human beings, at hundreds of miles an hour through congested airspace, in bad weather and at night, the risk of being killed on an air journey is only about one in a million.

In the crypto community, on the other hand, there is no such learning mechanism. The history of the subject ([K1], [W1]) shows the same mistakes being made over and over again; in particular, poor management of codebooks and cipher machine procedures enabled many communication networks to be broken. Kahn relates, for example [K1, p 484], that Norway's rapid fall in the second world war was largely due to the fact that the British Royal Navy's codes had been solved by the German Beobachtungsdienst - using exactly the same techniques that the Royal Navy's own `Room 40' had used against Germany in the previous war.

Since world war two, a curtain of silence has descended on government use of cryptography. This is not surprising, given not just the cold war, but also the reluctance of bureaucrats (in whatever organisation) to admit their failures. But it does put the cryptosystem designer at a severe disadvantage compared with engineers working in other disciplines; the post-war years are precisely the period in which modern cryptographic systems have been developed and brought into use. It is as if accident reports were only published for piston-engined aircraft, and the causes of all jet aircraft crashes were kept a state secret.

Automatic Teller Machines

To discover how modern cryptosystems are vulnerable in practice, we have to study their use elsewhere. After government, the next largest application is in banking, where cryptology evolved to protect automatic teller machines (ATMs) from fraud.

In some countries (including the USA), the banks have to carry the risks associated with new technology. Following a legal precedent, in which a bank customer's word that she had not made a withdrawal was found to outweigh the bank's experts' word that she must have done [JC], the US Federal Reserve passed regulations which require banks to refund all disputed transactions unless they can prove fraud by the customer [E]. This has led to some minor abuse - misrepresentations by customers are estimated to cost the average US bank about $15,000 a year [W2] - but it has helped promote the development of security technologies such as cryptology and video.

In Britain, the regulators and courts have not yet been so demanding, and despite a parliamentary commission of enquiry which found that the PIN system was insecure [J1], bankers simply deny that their systems are ever at fault. Customers who complain about debits on their accounts for which they were not responsible - so-called `phantom withdrawals' - are told that they are lying, or mistaken, or that they must have been defrauded by their friends or relatives.

The most visible result in the UK has been a string of court cases, both civil and criminal. The pattern which emerges leads us to suspect that there may have been a number of miscarriages of justice over the years.

Finally, in 1992, a large class action got under way in the High Court in London [MB], in which hundreds of plaintiffs seek to recover damages from various banks and building societies. We were retained by the plaintiffs to provide expert advice, and accordingly conducted some research during 1992 into the actual and possible failure modes of automatic teller machine systems. This involved interviewing former bank employees and criminals, analysing statements from plaintiffs and other victims of ATM fraud, and searching the literature. We were also able to draw on experience gained in the mid-1980s designing cryptographic equipment for the financial sector, and advising clients overseas on its use.

We shall now examine some of the ways in which ATM systems have actually been defrauded. We will then compare them with how the designers thought their products might in theory be vulnerable, and see what lessons can be drawn. Some material has had to be held back for legal reasons, and in particular we do not identify all the banks whose mistakes we discuss. This information should be provided by witnesses at trial, and its absence here should have no effect on the points we wish to make.

How ATM Fraud Takes Place

We will start with some simple examples which indicate the variety of frauds that can be carried out without any great technical sophistication, and the bank operating procedures which let them happen. For the time being, we may consider that the magnetic strip on the customer's card contains only his account number, and that his personal identification number (PIN) is derived by encrypting this account number and taking four digits from the result. Thus the ATM must be able to perform this encryption operation, or to check the PIN in some other way (such as by an online enquiry).

Some simple examples

1.
Many frauds are carried out with some inside knowledge or access, and ATM fraud turns out to be no exception. Banks in the English-speaking world dismiss about one percent of their staff every year for disciplinary reasons, and many of these sackings are for petty thefts in which ATMs can easily be involved. A bank with 50,000 staff, which issues cards and PINs through the branches rather than by post, might expect about two incidents per business day of staff stealing cards and PINs.

Most thefts by staff show up as phantom withdrawals at ATMs in the victim's neighbourhood. English banks maintain that a computer security problem would result in a random distribution of transactions round the country, and as most disputed withdrawals happen near the customer's home or place of work, these must be due to cardholder negligence [BB]. Thus the pattern of complaints which arises from thefts by their own staff only tends to reinforce the banks' complacency about their systems.

2.
Outsiders have also enjoyed some success at attacking ATM systems.

3.
The point of using a four-digit PIN is that someone who finds or steals another person's ATM card has a chance of only one in ten thousand of guessing the PIN, and if only three attempts are allowed, then the likelihood of a stolen card being misused should be less than one in 3,000. However, some banks have managed to reduce the diversity of a four-digit PIN to much less than 10,000. For example:

4.
Some banks do not derive the PIN from the account number by encryption, but rather choose random PINs (or let the customers choose them) and then encrypt them for storage. Quite apart from the risk that customers may choose PINs which are easy to guess, this has a number of technical pitfalls.

For this reason, VISA recommends that banks should combine the customer's account number with the PIN before encryption [VSM]. Not all of them do.

5.
Despite all these horrors, Britain is by no means the country worst affected by card forgery. That dubious honour goes to Italy [L2], where losses amount to almost 0.5% of ATM turnover. Banks there are basically suffering from two problems.

More complex attacks

The frauds which we have described so far have all been due to fairly simple errors of implementation and operation. Security researchers have tended to consider such blunders uninteresting, and have therefore concentrated on attacks which exploit more subtle technical weaknesses. Banking systems have a number of these weaknesses too.

Although high-tech attacks on banking systems are rare, they are of interest from the public policy point of view, as government initiatives such as the EC's Information Technology Security Evaluation Criteria [ITSEC] aim to develop a pool of evaluated products which have been certified free of known technical loopholes.

The basic assumptions behind this program are that implementation and operation will be essentially error-free, and that attackers will possess the technical skills which are available in a government signals security agency. It would therefore seem to be more relevant to military than civilian systems, although we will have more to say on this later.

In order to understand how these sophisticated attacks might work, we must look at banking security systems in a little more detail.

How ATM encryption works

Most ATMs operate using some variant of a system developed by IBM, which is documented in [MM]. This uses a secret key, called the `PIN key', to derive the PIN from the account number, by means of a published algorithm known as the Data Encryption Standard, or DES. The result of this operation is called the `natural PIN'; an offset can be added to it in order to give the PIN which the customer must enter. The offset has no real cryptographic function; it just enables customers to choose their own PIN. Here is an example of the process:


  Account number:       8807012345691715
  PIN key:              FEFEFEFEFEFEFEFE
  Result of DES:        A2CE126C69AEC82D
  Result decimalised:   0224126269042823
  Natural PIN:          0224
  Offset:               6565
  Customer PIN:         6789
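
To make the derivation concrete, here is a short sketch of it in Python, using the pycryptodome library for DES. We stress that this is an illustration of the shape of the computation rather than any bank's actual code: the decimalisation table below is the common one (hexadecimal digits 0-F mapped to 0-9, 0-5), and the figures in the example above should be treated as illustrative.

  # Sketch of the IBM PIN derivation scheme described above.
  # Requires pycryptodome:  pip install pycryptodome
  from Crypto.Cipher import DES

  DECIMALISATION_TABLE = "0123456789012345"   # hex digit 0..F -> decimal digit

  def natural_pin(account_number_hex, pin_key_hex, length=4):
      """Encrypt the account number under the PIN key and decimalise the result."""
      cipher = DES.new(bytes.fromhex(pin_key_hex), DES.MODE_ECB)
      result = cipher.encrypt(bytes.fromhex(account_number_hex)).hex().upper()
      decimalised = "".join(DECIMALISATION_TABLE[int(h, 16)] for h in result)
      return decimalised[:length]

  def customer_pin(natural, offset):
      """Add the offset to the natural PIN, digit by digit, modulo 10."""
      return "".join(str((int(n) + int(o)) % 10) for n, o in zip(natural, offset))

  nat = natural_pin("8807012345691715", "FEFEFEFEFEFEFEFE")
  print("Natural PIN: ", nat)                        # 0224 in the worked example above
  print("Customer PIN:", customer_pin(nat, "6565"))  # natural PIN 0224 + offset 6565 = 6789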

It is clear that the security of the system depends on keeping the PIN key absolutely secret. The usual strategy is to supply a `terminal key' to each ATM in the form of two printed components, which are carried to the branch by two separate officials, input at the ATM keyboard, and combined to form the key. The PIN key, encrypted under this terminal key, is then sent to the ATM by the bank's central computer.
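
The text above does not say how the two printed components are combined; the conventional technique, assumed in the following sketch, is to XOR them together, so that neither official alone learns anything about the resulting terminal key. The component values below are made up purely for illustration.

  # Sketch of dual-control key loading, assuming XOR combination of components.
  # Requires pycryptodome:  pip install pycryptodome
  from Crypto.Cipher import DES

  def combine_components(comp1_hex, comp2_hex):
      """XOR two 8-byte key components entered separately at the ATM keyboard."""
      return bytes(a ^ b for a, b in zip(bytes.fromhex(comp1_hex), bytes.fromhex(comp2_hex)))

  def wrap_key(key_to_send, terminal_key):
      """Encrypt a key (e.g. the PIN key) under the terminal key for transmission to the ATM."""
      return DES.new(terminal_key, DES.MODE_ECB).encrypt(key_to_send)

  # Hypothetical component values, for illustration only.
  terminal_key = combine_components("0123456789ABCDEF", "A5A5A5A5A5A5A5A5")
  encrypted_pin_key = wrap_key(bytes.fromhex("FEFEFEFEFEFEFEFE"), terminal_key)
  print("PIN key encrypted under terminal key:", encrypted_pin_key.hex().upper())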

If the bank joins a network, so that customers of other banks can use its ATMs, then the picture becomes more complex still. `Foreign' PINs must be encrypted at the ATM using a `working' key it shares with its own bank, where they are decrypted and immediately re-encrypted using another working key shared with the card issuing bank.
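
Conceptually, the acquiring bank (or the network switch) performs a translation like the one sketched below. In reality the PIN block is an ISO-formatted block and the decrypt-and-re-encrypt step takes place inside a security module, so that the clear PIN never appears in ordinary memory; the sketch assumes single-DES working keys and a raw 8-byte PIN block purely for illustration.

  # Sketch of PIN-block translation between working keys at a network switch.
  # Requires pycryptodome:  pip install pycryptodome
  from Crypto.Cipher import DES

  def translate_pin_block(pin_block, acquirer_working_key, issuer_working_key):
      """Decrypt a PIN block received under the key shared with the ATM's bank,
      then re-encrypt it under the key shared with the card-issuing bank."""
      clear = DES.new(acquirer_working_key, DES.MODE_ECB).decrypt(pin_block)
      return DES.new(issuer_working_key, DES.MODE_ECB).encrypt(clear)

  # Hypothetical keys and a made-up 8-byte PIN block, for illustration only.
  wk_atm_bank = bytes.fromhex("A486E0C22C0E684A")
  wk_issuer   = bytes.fromhex("13579BDF02468ACE")
  pin_block   = DES.new(wk_atm_bank, DES.MODE_ECB).encrypt(bytes.fromhex("0412345FFFFFFFFF"))
  print(translate_pin_block(pin_block, wk_atm_bank, wk_issuer).hex().upper())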

These working keys in turn have to be protected, and the usual arrangement is that a bank will share a `zone key' with other banks or with a network switch, and use this to encrypt fresh working keys which are set up each morning. It may also send a fresh working key every day to each of its ATMs, by encrypting it under the ATM's terminal key.

A much fuller description of banking security systems can be found in books such as [DP] and [MM], and in equipment manuals such as [VSM] and [NSM]. All we really need to know is that a bank has a number of keys which it must keep secret. The most important of these is of course the PIN key, as anyone who gets hold of this can forge a card for any customer's account; but other keys (such as terminal keys, zone keys and working keys) could also be used, together with a wiretap, to find out customer PINs in large numbers.

Keeping keys secret is only part of the problem. They must also be available for use at all times by authorised processes. The PIN key is needed all the time to verify transactions, as are the current working keys; the terminal keys and zone keys are less critical, but are still used once a day to set up new working keys.

The original IBM encryption products, such as PCF and the 3848, did not solve the problem: they only did the encryption step, and left the other manipulations to a mainframe computer program, which each bank had to write anew for itself. Thus the security depended on the skill and integrity of each bank's system development and maintenance staff.

The standard approach nowadays is to use a device called a security module. This is basically a PC in a safe, and it is programmed to manage all the bank's keys and PINs in such a way that the mainframe programmers only ever see a key or PIN in encrypted form. Banks which belong to the VISA and Mastercard ATM networks are supposed to use security modules, in order to prevent any bank customer's PIN becoming known to a programmer working for another bank (the Mastercard security requirements are quoted in [MM]; for VISA see [VSM]).

Problems with encryption products

In practice, there are a number of problems with encryption products, whether the old 3848s or the security modules now recommended by banking organisations. No full list of these problems, whether actual or potential, appears to have been published anywhere, but they include at least the following which have come to our notice:

1.
Although VISA and Mastercard have about 10,000 member banks in the USA and at least 1,000 of these do their own processing, enquiries to security module salesmen reveal that only 300 of these processing centres had actually bought and installed these devices by late 1990. The first problem is thus that the hardware version of the product does not get bought at all, either because it is felt to be too expensive, or because it seems to be too difficult and time-consuming to install, or because it was not supplied by IBM (whose own security module product, the 4753, only became available in 1990). Where a bank has no security modules, the PIN encryption functions will typically be performed in software, with a number of undesirable consequences.

2.
The `buy-IBM-or-else' policy of many banks has backfired in more subtle ways. One bank had a policy that only IBM 3178 terminals could be purchased, but the VISA security modules they used could not talk to these devices (they needed DEC VT 100s instead). When the bank wished to establish a zone key with VISA using their security module, they found they had no terminal which would drive it. A contractor obligingly lent them a laptop PC, together with software which emulated a VT100. With this the various internal auditors, senior managers and other bank dignitaries duly created the required zone keys and posted them off to VISA.

However, none of them realised that most PC terminal emulation software packages can be set to log all the transactions passing through, and this is precisely what the contractor did. He captured the clear zone key as it was created, and later used it to decrypt the bank's PIN key. Fortunately for them (and VISA), he did this only for fun and did not plunder their network (or so he claims).

3.
Not all security products are equally good, and very few banks have the expertise to tell the good ones from the mediocre.

4.
Even where one of the better products is purchased, there are many ways in which a poor implementation or sloppy operating procedures can leave the bank exposed.

5.
Cryptanalysis may be one of the less likely threats to banking systems, but it cannot be completely ruled out.

The consequences for bankers

The original goal of ATM crypto security was that no systematic fraud should be possible without the collusion of at least two bank staff [NSM]. Most banks do not seem to have achieved this goal, and the reasons have usually been implementation blunders, ramshackle administration, or both.

The technical threats described above under `Problems with encryption products' are the ones which most exercised the cryptographic equipment industry, and which their products were designed to prevent. However, only two of the cases in that section actually resulted in losses, and both of those can just as easily be classed as implementation failures.

The main technical lessons for bankers are that competent consultants should have been hired, and much greater emphasis should have been placed on quality control. This is urgent in its own right: in addition to fraud, errors also cause a significant number of disputed ATM transactions.

All systems of any size suffer from program bugs and operational blunders: banking systems are certainly no exception, as anyone who has worked in the industry will be aware. Branch accounting systems tend to be very large and complex, with many interlocking modules which have evolved over decades. Inevitably, some transactions go astray: debits may get duplicated or posted to the wrong account.

This will not be news to financial controllers of large companies, who employ staff to reconcile their bank accounts. When a stray debit appears, they demand to see a voucher for it, and get a refund from the bank when this cannot be produced. However, the ATM customer with a complaint has no such recourse; most bankers outside the USA just say that their systems are infallible.

This policy carries with it a number of legal and administrative risks. Firstly, there is the possibility that it might amount to an offence, such as conspiracy to defraud; secondly, it places an unmeetable burden of proof on the customer, which is why the US courts struck it down [JC], and courts elsewhere may follow their lead; thirdly, there is a moral hazard, in that staff are encouraged to steal by the knowledge that they are unlikely to be caught; and fourthly, there is an intelligence failure, as with no central records of customer complaints it is not possible to monitor fraud patterns properly.

The business impact of ATM losses is therefore rather hard to quantify. In the UK, the Economic Secretary to the Treasury (the minister responsible for bank regulation) claimed in June 1992 that errors affected at most two ATM transactions out of the three million which take place every day [B]; but under the pressure of the current litigation, this figure has been revised, firstly to 1 in 250,000, then 1 in 100,000, and lately to 1 in 34,000 [M1].

As customers who complain are still chased away by branch staff, and since a lot of people will just fail to notice one-off debits, our best guess is that the real figure is about 1 in 10,000. Thus, if an average customer uses an ATM once a week for 50 years, we would expect that about one in four customers will experience an ATM problem at some time in their lives.
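
For the record, the arithmetic behind this estimate (our own, assuming 52 withdrawals a year, a dispute rate of 1 in 10,000 per transaction, and independence between transactions) is roughly:

  \[ 1 - \left(1 - \frac{1}{10\,000}\right)^{52 \times 50} \approx 1 - e^{-0.26} \approx 0.23 \]

that is, just under one in four.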

Bankers are thus throwing away a lot of goodwill, and their failure to face up to the problem may undermine confidence in the payment system and contribute to unpopularity, public pressure and ultimately legislation. While they consider their response to this, they are not only under fire in the press and the courts, but are also saddled with systems which they built from components which were not understood, and whose administrative support requirements have almost never been adequately articulated. This is hardly the environment in which a clear headed and sensible strategy is likely to emerge.

The implications for equipment vendors

Equipment vendors will argue that real security expertise is only to be found in universities, government departments, one or two specialist consultancy firms, and in their design labs. Because of this skill shortage, only huge projects will have a capable security expert on hand during the whole of the development and implementation process. Some projects may get a short consultancy input, but the majority will have no specialised security effort at all. The only way in which the experts' knowhow can be brought to market is therefore in the form of products, such as hardware devices, software packages and training courses.

If this argument is accepted, then our research implies that vendors are currently selling the wrong products, and governments are encouraging this by certifying these products under schemes like ITSEC.

As we have seen, the suppliers' main failure is that they overestimate their customers' level of cryptologic and security design sophistication.

IBM's security products, such as the 3848 and the newer 4753, are a good case in point: they provide a fairly raw encryption capability, and leave the application designer to worry about protocols and to integrate the cryptographic facilities with application and system software.

This may enable IBM to claim that a 4753 will do any cryptographic function that is required, that it can handle both military and civilian security requirements and that it can support a wide range of security architectures [JDKLM]; but the hidden cost of this flexibility is that almost all their customers lack the skills to do a proper job, and end up with systems which have bugs.

A second problem is that those security functions which have to be implemented at the application level end up being neglected. For example, security modules provide a warning message if a decrypted key has the wrong parity, which would let the bank know that someone is experimenting with the system; but there is usually no mainframe software to relay this warning to anyone who can act on it.
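
The parity check in question is simple: DES keys are conventionally stored with odd parity in every byte (the low-order bit of each byte is a parity bit), so a decrypted `key' which fails the test below was probably not a genuine key at all. A minimal sketch, ours rather than any vendor's code, with made-up key values:

  def has_odd_parity(key: bytes) -> bool:
      """Return True if every byte of the key has an odd number of 1 bits."""
      return all(bin(b).count("1") % 2 == 1 for b in key)

  # Hypothetical values: the first has correct odd parity, the second does not.
  for k in ("FEFEFEFEFEFEFEFE", "0023456789ABCDEF"):
      ok = has_odd_parity(bytes.fromhex(k))
      print(k, "parity OK" if ok else "BAD PARITY - possible key manipulation")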

The third reason why equipment designers should be on guard is that the threat environment is not constant, or even smoothly changing. In many countries, organised crime ignored ATMs for many years, and losses remained low; once they took an interest, the effect was dramatic [BAB]. In fact, we would not be too surprised if the Mafia were to build a keysearch machine to attack the zone keys used in ATM networks. This may well not happen, but banks and their suppliers should work out how to react if it does.

A fourth problem is that sloppy quality control can make the whole exercise pointless. A supplier of equipment whose purpose is essentially legal rather than military may at any time be the subject of an order for disclosure or discovery, and have his design notes, source code and test data seized for examination by hostile expert witnesses. If they find flaws, and the case is then lost, the supplier could face ruinous claims for damages from his client. This may be a more hostile threat environment than that faced by any military supplier, but the risk does not seem to be appreciated by the industry.

In any case, it appears that implementing secure computer systems using the available encryption products is beyond most organisations' capabilities, as indeed is maintaining and managing these systems once they have been installed. Tackling this problem will require:

The urgency comes from the fact that many companies and government departments will continue to buy whatever products have been recommended by the appropriate authority, and then, because they lack the skill to implement and manage the security features, they will use them to build systems with holes.

This outcome is a failure of the certification process. One would not think highly of an inspector who certified the Boeing 747 or the Sukhoi Su-26 for use as a basic trainer, as these aircraft take a fair amount of skill to fly. The aviation community understands this, and formalises it through a hierarchy of licences - from the private pilot's licence for beginners, through various commercial grades, to the airline licence which is a legal requirement for the captain of any scheduled passenger flight.

In the computer security community, however, this has not happened yet to any great extent. There are some qualifications (such as Certified Information Systems Auditor) which are starting to gain recognition, especially in the USA, but most computer security managers and staff cannot be assumed to have had any formal training in the subject.

There are basically three courses of action open to equipment vendors:

The ideal solution may be some combination of these. For example, a vendor might perform the implementation with its own staff; train the customer's staff to manage the system thereafter; and design the product so that the only maintenance possible is the replacement of complete units. However, vendors and their customers should be aware that both the second and third of the above options carry a significant risk that the security achieved will deteriorate over time under normal budgetary pressures.

Whatever the details, we would strongly urge that information security products should not be certified under schemes like ITSEC unless the manufacturer can show that both the system factors and the human factors have been properly considered. Certification must cover not just the hardware and software design, but also installation, training, maintenance, documentation and all the support that may be required by the applications and environment in which the product is licensed to be used.

The Wider Implications

As we have seen, security equipment designers and government evaluators have both concentrated on technical weaknesses, such as poor encryption algorithms and operating systems which could be vulnerable to trojan horse attacks. Banking systems do indeed have their share of such loopholes, but they do not seem to have contributed in any significant way to the crime figures.

The attacks which actually happened were made possible because the banks did not use the available products properly; due to lack of expertise, they made basic errors in system design, application programming and administration.

In short, the threat model was completely wrong. How could this have happened?

Why the threat model was wrong

During the 1980's, there was an industry wide consensus on the threat model, which was reinforced at conferences and in the literature. Designers concentrated on what could possibly happen rather than on what was likely to happen, and assumed that criminals would have the expertise, and use the techniques, of a government signals agency. More seriously, they assumed that implementers at customer sites would have either the expertise to design and build secure systems using the components they sold, or the common sense to call in competent consultants to help. This was just not the case.

So why were both the threat and the customers' abilities so badly misjudged?

The first error may be largely due to an uncritical acceptance of the conventional military wisdom of the 1970's. When ATMs were developed and a need for cryptographic expertise became apparent, companies imported this expertise from the government sector [C3]. The military model stressed secrecy, so secrecy of the PIN was made the cornerstone of the ATM system: technical efforts were directed towards ensuring it, and business and legal strategies were predicated on its being achieved. It may also be relevant that the early systems had only limited networking, and so the security design was established well before ATM networks acquired their present size and complexity.

Nowadays, however, it is clear that ATM security involves a number of goals, including controlling internal fraud, preventing external fraud, and arbitrating disputes fairly, even when the customer's home bank and the ATM raising the debit are in different countries. This was just not understood in the 1970's; and the need for fair arbitration in particular seems to have been completely ignored.

The second error was probably due to fairly straightforward human factors. Many organisations have no computer security team at all, and those that do have a hard time finding it a home within the administrative structure. The internal audit department, for example, will resist being given any line management tasks, while the programming staff dislike anyone whose rôle seems to be making their job more difficult.

Security teams thus tend to be `reorganised' regularly, leading to a loss of continuity; a recent study shows, for example, that the average tenure of computer security managers at US government agencies is only seven months [H]. In the rare cases where a security department does manage to thrive, it usually has difficulties attracting and keeping good engineers, as they get bored once the initial development tasks have been completed.

These problems are not unknown to security equipment vendors, but they are more likely to flatter the customer and close the sale than to tell him that he needs help.

This leaves the company's managers as the only group with the motive to insist on good security. However, telling good security from bad is notoriously difficult, and many companies would admit that technical competence (of any kind) is hard to instil in managers, who fear that becoming specialised will sidetrack their careers.

Corporate politics can have an even worse effect, as we saw above: even where technical staff are aware of a security problem, they often keep quiet for fear of causing a powerful colleague to lose face.

Finally we come to the `consultants': most banks buy their consultancy services from a small number of well known firms, and value an `air of certainty and quality' over technical credentials. Many of these firms pretend to expertise which they do not possess, and cryptology is a field in which it is virtually impossible for an outsider to tell an expert from a charlatan. The author has seen a report on the security of a national ATM network switch, where the inspector (from an eminent firm of chartered accountants) completely failed to understand what encryption was, and under the heading of communications security remarked that the junction box was well enough locked up to keep vagrants out!

Confirmation of our analysis

It has recently become clear (despite the fog of official secrecy) that the military sector has suffered exactly the same kind of experiences that we described above. The most dramatic confirmation came at a workshop held in Cambridge in April 93 [M2], where a senior NSA scientist, having heard a talk by the author on some of these results, said that:

This was encouraging, as it shows that our work is both accurate and important. However, with hindsight, it could have been predicted. Kahn, for example, attributes the Russian disasters of World War 1 to the fact that their soldiers found the more sophisticated army cipher systems too hard to use, and reverted to using simple systems which the Germans could solve without great difficulty [K1].

More recently, Price's survey of US Department of Defense organisations has found that poor implementation is the main security problem there [P]: although a number of systems use `trusted components', there are few, if any, operational systems which employ their features effectively. Indeed, it appears from his research that the availability of these components has had a negative effect, by fostering complacency: instead of working out a system's security requirements in a methodical way, designers just choose what they think is the appropriate security class of component and then regurgitate the description of this class as the security specification of the overall system.

The need for more emphasis on quality control is now gaining gradual acceptance in the military sector; the US Air Force, for example, is implementing the Japanese concept of `total quality management' in its information security systems [SSWDC]. However, there is still a huge vested interest in the old way of doing things; many millions have been invested in TCSEC and ITSEC compliant products, and this investment is continuing. A more pragmatic approach, based on realistic appraisal of threats and of organisational and other human factors, will take a long time to become approved policy and universal practice.

Nonetheless both our work, and its military confirmation, indicate that a change in how we do cryptology and computer security is needed, and there are a number of signs that this change is starting to get under way.

A New Security Paradigm?

As more people become aware of the shortcomings of traditional approaches to computer security, the need for new paradigms gets raised from time to time. In fact, there are now workshops on the topic [NSP], and an increasing number of journal papers make some kind of reference to it.

It is clear from our work that, to be effective, this change must bring about a change of focus. Instead of worrying about what might possibly go wrong, we need to make a systematic study of what is likely to; and it seems that the core security business will shift from building and selling `evaluated' products to an engineering discipline concerned with quality control processes within the client organisation.

When a paradigm shift occurs [K2], it is quite common for a research model to be imported from some other discipline in order to give structure to the newly emerging results. For example, Newton dressed up his dramatic results on mechanics in the clothing of Euclidean geometry, which gave them instant intellectual respectability; and although geometry was quickly superseded by calculus, it was a useful midwife at the birth of the new science. It also had a lasting influence in its emphasis on mathematical elegance and proof.

So one way for us to proceed would be to look around for alternative models which we might usefully import into the security domain. Here, it would seem that the relationship between secure systems and safety critical systems will be very important.

A new metaphor

Safety critical systems have been the subject of intensive study, and the field is in many ways more mature than computer security. There is also an interesting technical duality, in that while secure systems must do at most X, critical systems must do at least X; and while many secure systems must have the property that processes write up and read down, critical systems are the opposite in that they write down and read up. We might therefore expect that many of the concepts would go across, and again it is the US Air Force which has discovered this to be the case [JAJP]. The relationship between security and safety has also been investigated by other researchers [BMD].

There is no room here for a treatise on software engineering for safety critical systems; a number of introductory articles are available [C4]. We will mention only four very basic points [M3]:

1.
The specification should list all possible failure modes of the system. This should include every substantially new accident or incident which has ever been reported and which is relevant to the equipment being specified.

2.
The specification should make clear what strategy has been adopted to prevent each of these failure modes, or at least make them acceptably unlikely.

3.
The specification should then explain in detail how each of these failure management strategies is implemented, including the consequences when each single component, subroutine or subassembly of the system itself fails. This explanation must be assessed by independent experts, and it must cover not just technical design factors, but training and operational issues too. If the procedure when an engine fails is to fly on with the other engine, then what skills does a pilot need to do this, and what are the procedures whereby these skills are acquired, kept current and tested?

4.
The certification program must test whether the equipment can in fact be operated by people with the level of skill and experience assumed in the specification. It must also include a monitoring program whereby all incidents are reported to both the equipment manufacturer and the certification body.

These points tie in exactly with our findings (and with the NSA's stated experience). However, even a cursory comparison with the ITSEC programme shows that this has a long way to go. As we mentioned in the introduction, no-one seems so far to have attempted even the first stage of the safety engineering process for commercial cryptographic systems.

As for the other three stages, it is clear that ITSEC (and TCSEC) will have to change radically. Component-oriented security standards and architectures tend to ignore the two most important factors, which are the system aspect and the human element; in particular, they fail to ensure that the skills and performance required of various kinds of staff are included, together with the hardware and software, in the certification loop.

The competing philosophies

Within the field of critical systems, there are a number of competing approaches. The first is epitomised by railway signalling systems, and seeks either to provide multiple redundant interlocks or to base the safety features on the integrity of a kernel of hardware and software which can be subjected to formal verification [CW].

The second is the aviation paradigm which we introduced at the beginning of this article; here the quality engineering process is based on constant top level feedback and incremental improvement. This feedback also occurs at lower levels, with various distinct subsystems (pilot training, maintenance, airworthiness certification, traffic control, navigational aids, ...) interacting in fairly well understood ways with each other.

Of these two models, the first is more reductionist and the second more holist. They are not mutually exclusive (formal verification of avionics is not a bad thing, unless people then start to trust it too much); the main difference is one of system philosophy.

The most basic aspect of this is that in signalling systems, the system is in control; if the train driver falls asleep, or goes through a red light, the train will stop automatically. His task has been progressively deskilled until his main function is to see that the train stops precisely at the platform (and in some modern railways, even this task is performed automatically, with the result that driverless trains are beginning to enter service).

In civil aviation, on the other hand, the pilot remains firmly in command, and progress has made his job ever more complex and demanding. It was recently revealed, for example, that Boeing 747 autopilots have for 22 years been subject to erratic failures, which can result in the plane starting to roll.

Boeing's response was blunt: autopilots `are designed to assist and supplement the pilot's capabilities and not replace them', the company said [CR]. `This means our airplanes are designed so pilots are the final control authority and it means that a well trained crew is the first line of safety.'

The computer security implications

Both the railway and airline models find reflections in current security practice and research. The former model is dominant, due to the TCSEC/ITSEC emphasis on kernelisation and formal methods. In addition to the conventional multilevel secure evaluated products, kernelisation has been used at the application layer as well [A2] [C5].

Nonetheless, we must consider whether this is the right paradigm to adopt. Do we wish to make the computer security officer's job even more mechanical, and perhaps automate it entirely? This is the direction in which current trends seem to lead, and if our parallel with signalling systems is accurate, it is probably a blind alley; we should follow the aviation paradigm instead.

Another analogy is presented in [BGS], where it is argued that the traditional centralised model of security is like the old communist approach to economic management, and suffers from the same limitations. The authors there argue that to cope with a world of heterogeneous networks in which no single security policy is able to predominate, we need an infrastructure which enables information owners to control and trade their own property, rather than trusting everything to a centralised administrative structure.

This analogy from economics would, if developed, lead to somewhat similar conclusions to those which we draw from comparing railway signals with air traffic control systems. No doubt many other analogies will be explored over the next few years; the key point seems to be that, to be useful, a security metaphor should address not just the technical issues, but the organisational ones as well.

Conclusions

Designers of cryptographic systems have suffered from a lack of information about how their products fail in practice, as opposed to how they might fail in theory. This lack of feedback has led to a false threat model being accepted. Designers focussed on what could possibly go wrong, rather than on what was likely to; and many of their products are so complex and tricky to use that they are rarely used properly.

As a result, most security failures are due to implementation and management errors. One specific consequence has been a spate of ATM fraud, which has not just caused financial losses, but has also caused at least one miscarriage of justice and has eroded confidence in the UK banking system. There has also been a military cost; the details remain classified, but its existence has at last been admitted.

Our work also shows that component-level certification, as embodied in both the ITSEC and TCSEC programs, is unlikely to achieve its stated goals. This, too, has been admitted indirectly by the military (at least in the USA); and we would recommend that the next versions of these standards take much more account of the environments in which the components are to be used, and especially the system and human factors.

Most interesting of all, however, is the lesson that the bulk of computer security research and development activity is expended on activities which are of marginal relevance to real needs. A paradigm shift is underway, and a number of recent threads point towards a fusion of security with software engineering, or at the very least to an influx of software engineering ideas.

Our work also raises some very basic questions about goals, and about how the psychology of a design interacts with organisational structure. Should we aim to automate the security process, or enable it to be managed? Do we control or facilitate? Should we aim for monolithic systems, or devise strategies to cope with diversity? Either way, the tools and the concepts are becoming available. At least we should be aware that we have the choice.

Acknowledgement: I owe a significant debt to Karen Sparck Jones, who went through the manuscript of this paper and ruthlessly struck out all the jargon. Without her help, it would have been readable only by specialists.

Bibliography

A1
D Austin, ``Marking the Cards'', in Banking Technology, Dec 91/Jan 92, pp 18 - 21

A2
RJ Anderson, ``UEPS - A Second Generation Electronic Wallet''. in Computer Security - ESORICS 92, Springer LNCS 648, pp 411 - 418

B
M Buckler MP, letter to plaintiff's solicitor, 8 June 1992

BAB
``Card Fraud: Banking's Boom Sector'', in Banking Automation Bulletin for Europe, Mar 92, pp 1 - 5

BAN
M Burrows, M Abadi and RM Needham, `A Logic of Authentication', DEC SRC Research Report 39

BB
``Cash Dispenser Security'', Barclays Briefing (press release) 12/9/92

BGS
JA Bull, L Gong, K Sollins, ``Towards Security in an Open Systems Federation'', in Proceedings of ESORICS 92, Springer LNCS 648 pp 3 - 20

BMD
A Burns, JA McDermid, JE Dobson, `On the meaning of safety and security', University of Newcastle upon Tyne Computer Laboratory TR 382 (5/92)

C1
A Collins, ``Bank worker guilty of ATM fraud'', in Sunday Times, 22 Mar 1992

C2
A Collins, ``The Machines That Never Go Wrong'', in Computer Weekly, 27 June 1992, pp 24 - 25

C3
D Coppersmith, ``The Data Encryption Standard (DES) and its strength against attacks'', IBM Thomas J Watson Research Center technical report RC 18613 (81421), 22 December 1992

C4
J Cullyer, ``Safety-critical systems'', in Computing and Control Engineering Journal 2 no 5 (Sep 91) pp 202 - 210

C5
B Christianson, ``Document Integrity in CSCW'', in Proc. Cambridge Workshop on Formal Methods (1993, to appear)

CR
Boeing News Digest, quoted in usenet newsgroup `comp.risks' 14 no 5 (29 April 1993)

CW
J Cullyer, W Wong, ``Application of formal methods to railway signalling - a case study'', in Computing and Control Engineering Journal 4 no 1 (Feb 93) pp 15 - 22

DP
DW Davies and WL Price, `Security for Computer Networks', John Wiley and Sons 1984.

E
J Essinger, `ATM Networks - Their Organisation, Security and Future', Elsevier 1987

GO
G Garon and R Outerbridge, ``DES Watch: An examination of the Sufficiency of the Data Encryption Standard for Financial Institution Information Security in the 1990's'', in Cryptologia, XV, no. 3 (July 1991) pp 177 - 193

H
HJ Highland, ``Perspectives in Information Technology Security'', in Proceedings of the 1992 IFIP Congress, `Education and Society', IFIP A-13 vol II (1992) pp 440 - 446

ITSEC
`Information Technology Security Evaluation Criteria', June 1991, EC document COM(90) 314

J1
RB Jack (chairman), `Banking services: law and practice report by the Review Committee', HMSO, London, 1989

J2
K Johnson, ``One Less Thing to Believe In: Fraud at Fake Cash Machine'', in New York Times 13 May 1993 p 1

JAJP
HL Johnson, C Arvin, E Jenkinson, R Pierce, ``Integrity and assurance of service protection in a large, multipurpose, critical system'' in proceedings of the 15th National Computer Security Conference, NIST (1992) pp 252 - 261

JC
Dorothy Judd v Citibank, 435 NYS, 2d series, pp 210 - 212, 107 Misc.2d 526

JDKLM
DB Johnson, GM Dolan, MJ Kelly, AV Le, SM Matyas, ``Common Cryptographic Architecture Application Programming Interface'', in IBM Systems Journal 30 no 2 (1991) pp 130 - 150

K1
D Kahn, `The Codebreakers', Macmillan 1967

K2
TS Kuhn, `The Structure of Scientific Revolutions', Chicago 1970

L1
B Lewis, ``How to rob a bank the cashcard way'', in Sunday Telegraph 25th April 1992 p 5

L2
D Lane, ``Where Cash is King'', in Banking Technology, October 1992, pp 38 - 41

M1
S McConnell, ``Barclays defends its cash machines'', in The Times, 7 November 1992

M2
R Morris, invited lecture given at Cambridge 1993 formal methods workshop (proceedings to appear)

M3
JA McDermid, ``Issues in the Development of Safety Critical Systems'', public lecture, 3rd February 1993

MB
McConville & others v Barclays Bank & others, High Court of Justice Queen's Bench Division 1992 ORB no.812

MBW
McConville & others v Barclays Bank & others cit, affidavit by D Whalley

MM
CH Meyer and SM Matyas, `Cryptography: A New Dimension in Computer Data Security', John Wiley and Sons 1982.

N
I Newton, `Philosophiae Naturalis Principia Mathematica', University of California Press 1973

NSM
`Network Security Module - Application Developer's Manual', Computer Security Associates, 1990

NSP
New Security Paradigms Workshop, 2-5 August 1993, proceedings to be published by the ACM.

P
WR Price, ``Issues to Consider When Using Evaluated Products to Implement Secure Mission Systems'', in Proceedings of the 15th National Computer Security Conference, National Institute of Standards and Technology (1992) pp 292 - 299

R
RA Rueppel, ``Criticism of ISO CD 11166 Banking: Key Management by Means of Asymmetric Algorithms'', in Proceedings of 3rd Symposium of State and Progress of Research in Cryptography, Fondazione Ugo Bordoni, Rome 1993

RM
R v Moon, Hastings Crown Court, Feb 92

RSH
R v Stone and Hider, Winchester Crown Court July 1991

S
A Stone, ``ATM cards & fraud'', manuscript 1993

SSWDC
L Sutterfield, T Schell, G White, K Doster and D Cuiskelly, ``A Model for the Measurement of Computer Security Posture'', in Proceedings of the 15th National Computer Security Conference, NIST (1992) pp 379 - 388

TCSEC
`Trusted Computer System Evaluation Criteria', US Department of Defense, 5200.28-STD, December 1985

VSM
`VISA Security Module Operations Manual', VISA, 1986

W1
G Welchman, `The Hut Six Story', McGraw-Hill, 1982

W2
MA Wright, ``Security Controls in ATM Systems'', in Computer Fraud and Security Bulletin, November 1991, pp 11 - 14

W3
K Wong, ``Data security - watch out for the new computer criminals'', in Computer Fraud and Security Bulletin, April 1987, pp 7 - 13


This paper was published by the ACM in the Proceedings of the 1st ACM Conference on Computer and Communications Security (November 1993, VA, USA), ISBN 0-89791-629-8. Permission to copy all or part of this material is given provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
