We've been subjected to sales pitches for smart ID cards containing encoded fingerprints or iris scans, universal CCTV surveillance, government management of airport security, knowledge-based data mining of passenger lists, and significantly increased police control of Internet service providers. Some of the salesmen help propagate juicy urban myths, such as the claim that al-Qaida hides its electronic communications in pictures on pornographic web sites. The image of the mad mullah as cryptopornographer may be irresistible to a certain type of newspaper - but what's the truth?
I've been asked for comments on security engineering questions by quite a few journalists. Here, I gather together some of the points that I've made, and offer a few observations on why some of the common reactions to the attacks are not appropriate.
I'm one of the people who's supposed to understand security technology
a little. My day job is
leading the security group
at the Cambridge University
Computer Laboratory, where I'm on the faculty. Our professional
interests cover things like cryptography, biometrics and smartcards. I
also teach courses on security to undergraduates, and the lecture
notes I developed for these courses are available as a book. If it
were read more widely, we would hear much less rubbish about security.
In the wake of the tragedy of the 11th September, we saw a publicity
stampede by security vendors, consultancy companies, and intelligence
agencies hustling for money and power. This seemed to be in rather bad
taste, and showed a lack of respect for the dead. Now that the world
is getting back to something like business as usual, it's time to
debunk the rubbish that's being peddled.
Ross Anderson
Newspapers commonly describe shocking events as `unprecedented', but
this is rarely so. The history of piracy offers some intriguing
parallels. For example, after Ferdinand and Isabella drove the last
Muslims from Spain in 1492, the hundreds of thousands of educated,
unemployed refugees dumped in North Africa supported piracy against
European shipping. They not only took cargoes, but also sold captured
Christian sailors as slaves. (The pirates viewed this as `jihad';
European governments saw it as a revolting crime.) The equivalent of
the World Trade Center attack was the seizure, near Elba in 1504, by
the elder Barbarossa, of Pope Julius II's two treasure galleons, using
only four rowing boats of fanatical Mujahideen.
This exploit made Barbarossa a hero to Muslim youth. Ferdinand led a
punitive expedition, blockaded North Africa and sacked its cities.
After his death, piracy flared up again ... and again. Then, as now,
it was usually convenient for the rulers of states north of the
Mediterranean to compromise with those to the south, and turn a blind
eye to many violent substate activities that are nowadays called
`terrorism' but were then known as piracy or privateering. Such
semi-privatised warfare was conducted later in the Caribbean and
elsewhere. The problem was not `fixed' until governments ended
privateering, in effect nationalising warfare, by the Declaration of
Paris in 1856.
For the intervening dozen generations, pirate bands repeatedly grew to
be a threat to states, then either got destroyed in battle, got
co-opted by governments, or even set up their own states. There is
much history on what worked and what didn't, and how piracy (and its
suppression) got entangled with the other great issues such as slavery
and colonialism. (The standard excuse for a colonial adventure was
the suppression of piracy and slavery.) There were many pure
`terrorist' incidents, such as when unpopular Caribbean governors got
swung from their flagpoles and their towns razed. There were also
romantic aspects: successful pirates became national heroes.
Since the Six-Day War in 1967, the old destructive pattern appears to
be re-emerging. Air piracy may have replaced sea piracy as the means
of jihad, but neither the West nor the Muslim world should be keen to
slip back into the old pattern of low-grade proxy conflict. Last time
round, this kept the Muslim world impoverished, leading ultimately to
its dismemberment and colonisation. It was not risk-free to the
northern powers either. When a storm helped Pasha Hassan defeat Charles
V in the Battle of Algiers in 1541, so many Spaniards were captured
that the price of slaves collapsed: a Christian was `scarcely fair
exchange for an onion'. Oh, and some `Christian' powers also harboured
pirates. In one case, the King of Spain sent a fleet to crush one of
these upstart rogue states. The loss of the resulting sea battle, in
1588, led ultimately to the collapse of the Spanish empire.
(The above points were picked up in an article in the Daily Telegraph.)
Punitive reprisals didn't provide a magic solution to the problem of
piracy/terrorism, but what about technology? One of the `solutions'
that people have been hawking since the 11th September is biometrics -
devices that can recognise people. The Malaysian government is
apparently going to issue all its citizens with an ID card that
contains a machine readable fingerprint; it's been suggested that iris
scanning be used to identify all UK citizens; and Visionics has seen
its stock price treble after it touted a technology to scan the
faces of people in crowds for known terrorists.
The Visionics system has since been thoroughly debunked in an article
in the New York Times which revealed that their system, used in some
UK towns, relies on the placebo effect; it manages to reduce crime
because criminals believe it works, even though from the technical
viewpoint it doesn't really. (I already remarked on this in my book
at page 265.)
Biometrics suffer from a number of problems. These include high error
rates; a typical fingerprint scanner will get it wrong about 1% of the
time, and most other technologies are worse. The result is that in
many applications the false alarm rate is unacceptable. In mass
screening, there will be so many more false alarms than real ones that
the `boy who cried wolf' effect will discredit the system in the eyes
of the public and of its operators.
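To see why, here's a back-of-the-envelope calculation in Python. The
figures are illustrative assumptions, not measurements from any real
deployment:

    # Illustrative base-rate arithmetic for mass biometric screening.
    # All figures are assumptions for the sake of the example.

    travellers_per_day = 1_000_000   # people screened daily at a large hub
    terrorists_per_day = 1           # assume one genuine target per day
    false_alarm_rate = 0.01          # 1% of innocent people flagged
    hit_rate = 0.90                  # 90% of genuine targets flagged

    false_alarms = (travellers_per_day - terrorists_per_day) * false_alarm_rate
    true_alarms = terrorists_per_day * hit_rate

    print(f"False alarms per day: {false_alarms:,.0f}")   # about 10,000
    print(f"True alarms per day:  {true_alarms:.1f}")     # about 0.9
    print(f"Odds that an alarm is real: 1 in {false_alarms / true_alarms:,.0f}")

On these assumptions, fewer than one alarm in ten thousand is genuine -
which is exactly the `boy who cried wolf' problem.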
Scanners can be tuned to give fewer false alarms, but only at the cost
of more missed alarms. How this works out depends on the application;
in banking, for example, the requirement is for a false alarm
(`insult') rate of 0.01% and a missed alarm (`fraud') rate of 1%;
currently the only technology that can meet this is iris
scanning. (That's held back by marketing issues: people are
reluctant to gaze into a scanner in case it malfunctions and blinds
them.)
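The following sketch shows the shape of that tradeoff, using made-up
Gaussian score distributions rather than data from any real biometric:

    # Sketch of the false-alarm / missed-alarm tradeoff as a matcher
    # threshold moves. The score distributions are invented Gaussians.
    from statistics import NormalDist

    impostor = NormalDist(mu=0.0, sigma=1.0)   # scores from non-matching comparisons
    genuine  = NormalDist(mu=3.0, sigma=1.0)   # scores from matching comparisons

    for threshold in (1.0, 1.5, 2.0, 2.5):
        false_accept = 1.0 - impostor.cdf(threshold)  # impostors scoring above threshold
        false_reject = genuine.cdf(threshold)         # genuine users scoring below it
        print(f"threshold={threshold:.1f}  "
              f"false accepts={false_accept:.4%}  false rejects={false_reject:.4%}")

With these invented distributions, no threshold meets both the 0.01%
insult target and the 1% fraud target at once; the two error rates can
only be traded against each other, which is why a technology whose
genuine and impostor scores are very well separated is needed to hit
both.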
Other factors include scale (the more subjects you have, the more
likely you are to have two subjects the system can't tell apart),
forgery (is the subject using a rubber fingerpad, or wearing a contact
lens with someone else's iris printed on it?), standards (if one
particular biometric becomes the standard, then the Mafia will know
yours from the time you ate in one of their restaurants and used it to
authorise a credit card transaction), and social exclusion
(fingerprint scanners work less well with the elderly and with manual
workers, while iris scanners work less well with people whose eyes are
dark, such as Arabs). Oh, and there are religious objections too: see
Revelation 13:16-18. You don't want to have to fight both the Muslim
fundamentalists and the Christian fundamentalists at the same time, do
you?
The executive summary is that biometric systems are generally
ill-suited to mass applications. They can work well for attended
applications involving small numbers of people. For example, I tried
to get iris scanners for entry control to our new computer lab
building. It would have meant I didn't have to fumble for a door key
while carrying an armful of books - just look at the lock. (The reason
we didn't buy it is that the currently available products are
expensive and haven't been integrated with commodity entry control
software. However, give the market a year or two, and that should get
fixed.)
In my book, the
chapter on biometrics covers the engineering issues of using
handwritten signatures, face recognition, fingerprints, iris codes and
voice recognition, as well as system-level issues and the things that
commonly go wrong. I don't know of any other sources that give a
warts-and-all view of the whole field.
Managing the tradeoff between false alarms and missed alarms is not
just a fundamental problem for biometric systems, but is one of the
most pervasive problems in security engineering. If your metal
detector is too sensitive, you end up hand-searching most of your
passengers; if it isn't, it becomes too easy to get a gun on board.
And now that you can buy box cutters that contain less ferrous metal
than a typical belt buckle, things are getting tricky.
The false alarm rate doesn't just limit what can be achieved with
metal detectors and biometric identification systems. It also leaves
many perimeter defences extremely vulnerable to opponents who induce false
alarms deliberately; the smart way of sneaking a bomb into an airport
at night is to rattle the fence a few times until the guards decide
that the alarm is malfunctioning and stop coming round to look. (Here is
a sample chapter of my book that describes this.) It also limits what
can be done with wiretapping. If, prior to September 11th, the NSA had
been able to scan all transatlantic telephone calls automatically for
the simultaneous occurrence of the words `bomb', `al-Qaida' and `bin
Laden', they might have filtered out a quantity small enough to listen
to; but the FBI now tells us that the terrorists used simple open
codes such as referring to their chief as `the director'. If the scope
of the system had been increased to pick out everyone speaking in a
guarded way, it would most likely have selected so many calls that
no-one could have listened to them.
The tradeoff between false alarms and missed alarms was first explored
systematically by radar engineers during World War 2, and quite a lot
is known about it. It is the fulcrum on which modern electronic
warfare rests. It's not enough just to combine multiple sources of
information, as people trying to sell `AI data mining' systems
suggest; combining sensors usually reduces one of the two types of
alarm but increases the other. Suppose you have an air defence battery
with both radar and infrared sensors. If you launch a missile only
when both agree there is a plane in range, then there will be fewer
false alarms, but more missed alarms; if you launch on either signal,
it will be the other way round. (And if the enemy is smart, and your
system is too sensitive, he will bombard you with false alarms and
blind you.) The mathematics of all this isn't too difficult, but
ignorance of it leads to a huge amount of money being wasted.
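A toy calculation makes the point, assuming (unrealistically) that the
two sensors err independently, and using invented per-sensor rates:

    # Combining two independent sensors: AND fusion vs OR fusion.
    # Per-sensor rates are illustrative assumptions.
    p_fa_radar, p_md_radar = 0.05, 0.10   # radar: false alarm, missed detection
    p_fa_ir,    p_md_ir    = 0.08, 0.15   # infrared: false alarm, missed detection

    # Fire only when BOTH sensors agree (AND): fewer false alarms, more misses.
    fa_and = p_fa_radar * p_fa_ir
    md_and = 1 - (1 - p_md_radar) * (1 - p_md_ir)

    # Fire when EITHER sensor triggers (OR): more false alarms, fewer misses.
    fa_or = 1 - (1 - p_fa_radar) * (1 - p_fa_ir)
    md_or = p_md_radar * p_md_ir

    print(f"AND fusion: false alarms {fa_and:.2%}, missed alarms {md_and:.2%}")
    print(f"OR fusion:  false alarms {fa_or:.2%}, missed alarms {md_or:.2%}")

Whichever fusion rule you pick, one error rate goes down and the other
goes up; adding more data sources doesn't make the tradeoff go away.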
It's not just money at stake, but civil liberties. It is now fifty
years since the UK abolished its mandatory ID cards, and for most
of these fifty years our civil service has been scheming to
reintroduce them, using a wide variety of excuses (access to
computerised health records, smartcards for secure digital signatures
for electronic commerce, unified access to e-government services, you
name it). So one of the first government reactions to the atrocities
was to float a plan for ID cards; in the USA, both Larry Ellison and
Scott McNealy have called for an ID card system. Fortunately, in both
our countries, the ID industry appears to have been beaten off - for
now.
In addition to the well known political arguments against identity
cards, which you can find at sites like Privacy
International, there are quite a few security engineering issues
that are often overlooked.
Some of the most subtle have to do with naming. An ID card system
might seem to be just a means of assigning a unique name to everyone
in the world: their country of residence, followed by their name and
date of birth, and/or the number of their ID card or passport. Yet a
systems engineer knows that the purpose of a name is to facilitate
sharing, and this gives us a useful conceptual framework to analyse
exactly what we're trying to achieve.
Suppose that we wish to ensure that some information from the French
secret police, e.g. that the Algerian student Achmed Boulihia was said
by an informant in 1992 to be a Muslim fundamentalist who idolised the
assassins of Anwar Sadat, gets shared with an airline checkin agent to
whom the same man presents himself in America in December 2001. What
can go wrong?
Well, names can change in all sorts of ways. There are cultural issues
(Arab men often change their names to celebrate the birth of their
first son) and linguistic issues (Achmed might now spell his name with
a different romanisation of the Arabic, such as Ahmed Abu Lihya).
There are lots of issues like these, and people who design systems
that deal with the same individuals over periods of decades (such as
land registries) have to do a lot of work to cope with them. So the
checkin agent would probably prefer to rely on a simple unique
identifier such as a social security number. But here again he runs up
against funny foreign ways. In Germany, for example, the ID card
number is a document number rather than a person number, so you can
change your ID number by losing your card and getting a new one.
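Here is a toy illustration of the point, with an invented date of birth
and invented document numbers; the moral is simply that exact matching
on either names or document numbers is fragile:

    # Why exact matching on names and document numbers is fragile.
    # Names, date of birth and ID numbers below are invented.

    watchlist = {("ACHMED BOULIHIA", "1973-04-12")}   # as recorded in 1992

    def exact_match(name, dob):
        return (name.upper(), dob) in watchlist

    # A different romanisation of the same Arabic name slips straight through:
    print(exact_match("AHMED ABU LIHYA", "1973-04-12"))   # False

    # And a document-number identifier is no better if losing the document
    # gets you a new number, as with German ID cards:
    old_id, new_id = "T220001293", "T220009841"
    flagged = {old_id}
    print(new_id in flagged)   # False - the same person now looks clean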
My book goes into a lot more detail, but, basically, identity and
registration systems are extremely hard to design even when everyone
involved is honest and competent. (And there are many parts of the
world where you can buy a genuine passport in any old name for a few
hundred dollars.)
Identity theft is another issue. It appears, for example, that one of
the hijackers used a passport stolen from a Saudi in the USA five
years previously. This is a good example of what security engineers
call the revocation problem - how do you get bad news quickly to
people who need to know it? The standard example of a global
revocation system is given by the lists of stolen credit cards that
circulate round the world's banks and merchant terminals. This is big,
and complex, and only works really well where the transactions are
local or the amounts involved are large. It is just not economic to
inform every banking terminal in Russia whenever a credit card is
reported missing in Portugal. And if the US airline system can't pick
up on a theft reported in the USA to the US authorities, what chance
is there for something that works well internationally?
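Here's a minimal sketch of the revocation problem, with invented dates
and numbers: a terminal whose hot-list hasn't been refreshed will
cheerfully accept a document that was reported stolen days ago.

    # A toy revocation check against a stale local hot-list.
    # Timings and the passport number are invented for illustration.
    from datetime import datetime

    reported_stolen_at = datetime(2001, 9, 1, 10, 0)   # theft reported centrally
    hotlist_synced_at  = datetime(2001, 8, 28, 3, 0)   # last update this terminal got
    local_hotlist = set()                              # the stolen number never arrived

    def accept(document_number):
        return document_number not in local_hotlist

    print(accept("P1234567"))   # True - stolen passport accepted anyway
    print("Terminal is", (reported_stolen_at - hotlist_synced_at).days, "days behind")

Making every terminal in the world consistently up to date is what
costs the money, and it is rarely worth it for small transactions.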
In short, there are inherent limitations due to system-level issues of
scale and complexity. These limitations are very difficult to conceal
from the bad guys, as they can try various types of identity document
and see which of them set off what alarms. In fact, there's a whole
criminal industry doing this, and using identity theft for all sorts
of wicked purposes - from obtaining credit cards, through student loan
fraud, to registering mobile phones in other people's names to feed
premium-rate scams. So if you are building a system to carry
intelligence data to airport checkin terminals, it's not all that
bright an idea to make it depend completely on such an unreliable
foundation as the international system of government-issue photo-ID.
(In any case, you have much more serious things to worry about, such
as whether the intelligence service providing your data is trying to
blacklist its political enemies, whether its informant was mistaken or
lying, and whether the checkin clerk works for the terrorist group's
counterintelligence department. The error rate tradeoffs for all this
get complicated and unpleasant enough.)
There is also the issue of whether binding humans strongly to unique
names is solving the right problem anyway. In the UK, our experience
is that the IRA use people who are unknown to authority for major
operations. Even a perfectly dependable global ID/passport system
would be of limited help against such attacks.
Since the early 1990s, various security and intelligence agencies have
been arguing that cryptography would be a potent weapon for
terrorists, and asking for laws that would give them control of
authentication on the internet. They painted a lurid picture of
terrorists and child pornographers escaping justice because they could
encrypt their emails. People involved in IT have generally seen this
as a shameless attempt to create a huge new bureaucratic empire that
would get in everyone's way. Civil libertarians went further and
built up the whole issue as a Manichean struggle between privacy and
surveillance: encryption, some said, was the only way the individual
could protect himself against overweening state surveillance.
In my own writings on the topic, such as this
paper from 1995, I argued that encryption, law enforcement and
privacy have little to do with each other. The real problem in police
communications intelligence is not understanding the traffic, but
selecting the messages of interest. Only a stupid criminal will
encrypt his traffic, as it will bring him to the attention of
authority; the technologies the police ought to have been worried
about are not `encryption on the Internet' but things like prepaid
mobile phones. Although senior policemen would agree in private, none
would dare buck the official line (that I was foolish, wicked, naive,
technologically clueless, and everything in between).
The events since September 11th have proved my view to be right, and
the agencies to be wrong. Al-Qaida's communications security was fit
for purpose, in that their attack took the agencies completely by
surprise. It did not involve encryption. It did involve hiding
messages - among the zillions of innocuous emails that pass across the
Internet each day. Two separate FBI reports stated that the hijackers
used throwaway webmail accounts, accessed from public libraries.
It is disgraceful (but predictable) that the agencies are using the
tragedy for bureaucratic empire building, by reviving all the old
policies of crypto control that everyone thought dead and buried. If
legislators allow themselves to be panicked into giving them their
way, the agencies will need huge budget increases, and acquire the
ability to push lots more people around. It is much less clear that
such policies will do anything of real value in the fight against
al-Qaida. Agencies that react to visible policy failure by demanding
that their old, failed policies be supported by even more money, and
even more coercive power, should be disbanded and replaced with new
organisations managed by new people.
There has been much press comment on poor airport security, and a lot
of effort is going into `tightening' it. Of course, it was clear from
the start to security professionals that this was solving the wrong
problem. Preventing similar hijackings in the future is a matter for
aircraft operating procedures, and the problem is now probably fixed -
regardless of anything that governments do. Thanks to the huge
publicity given to the attacks, it will be a long time before a pilot
opens the cockpit door and rushes out, just because he's heard that
one passenger is disembowelling another.
In the USA, the government is taking over responsibility for airport
security. This may not be an entirely good thing. The standard
government-security model works via perimeter controls: people have
clearances, objects have classifications and places are in zones.
These are related by a set of access rules and perimeter controls (such
as `no Secret document may be left on a desk overnight' and `only an
officer of rank colonel or above can declassify a Top Secret
document'). This approach gets used in a rough and ready way in
airports (`only staff with a red badge, and ticketed passengers who
have gone through metal detectors, can go airside'), but implementing
it any more thoroughly is likely to be problematic.
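The essence of the model can be written down in a few lines; this is a
simplified sketch, not anyone's actual policy:

    # A minimal sketch of the clearance/classification/zone model described
    # above. Labels and rules are simplified for illustration.
    LEVELS = {"Unclassified": 0, "Secret": 1, "Top Secret": 2}

    def may_enter(person_clearance, zone_classification):
        # A person may enter a zone only if cleared to at least its level.
        return LEVELS[person_clearance] >= LEVELS[zone_classification]

    # The rough-and-ready airport version: red badge, or a screened
    # ticketed passenger, gets you airside.
    def may_go_airside(has_red_badge, is_ticketed, passed_metal_detector):
        return has_red_badge or (is_ticketed and passed_metal_detector)

    print(may_enter("Secret", "Top Secret"))    # False
    print(may_go_airside(False, True, True))    # True
    print(may_go_airside(False, True, False))   # False

The rules are easy to write down; the hard and expensive part is making
the physical zones, badges and screening match them.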
In a large airport, over ten thousand employees of over a thousand
companies might have airside badges. Many of these have tools that can
be used as weapons; many others are minimum-wage staff with at best
cursory background checks. How do you stop a casual restaurant cleaner
from stealing a chisel from a maintenance carpenter?
The standard government-security approach would be to divide the
airport into a number of zones, in which there would be clear
restrictions on personnel and objects, and between which there would
be screening. I expect this would involve a large investment in
redesigning the buildings. An alternative would be to use only cleared
staff airside, or to limit and subdivide the airside zone by screening
passengers and hand baggage at the departure gate - but both of these
would also cost more. There is by now a large literature on the
practical problems of clearances, managing security compartments and
so on; I discuss some of the issues in chapters 7-8 of my book.
Bruce Schneier has collected some useful comments and links on airport
screening issues in Cryptogram.
A number of the points he makes, such as that airports go for visible
protective measures (such as bullying customers) rather than effective
ones (such as positive matching of bags to passengers at the loading
gate), have long been obvious to people in the `trade'. But why does so
much money get wasted?
Many security systems fail for reasons that are more economic than
technical. The people responsible for protecting a system are not
usually the people who get hurt when the protection fails. Often this
is because the system has changed subtly since it was first designed.
During the 1980s, banks connected up their ATM systems into networks,
and now instead of just preventing fraud by customers their security
people have to worry about lawsuits from other banks. This makes
everyone defensive and leads to customers who report a fraud being
told they must be mistaken or lying. This in turn makes bank staff
realise that they can swindle the system without getting caught, which
in turn causes fraud to increase.
Often, security should be discussed not just in the language of
ciphers, seals and biometrics, but in that of asymmetric information,
adverse selection and moral hazard - in short, of microeconomics. I have
written a paper on
these issues, which are one of our current research topics. They are
also discussed at length in my book.
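A toy model shows the mechanism. Assume, purely for illustration, that
fraud losses fall as security spending rises; a bank that can shift
most of the loss onto its customers will rationally spend far less:

    # Why a protector who doesn't bear the loss under-invests:
    # a toy expected-cost model with invented numbers.

    def expected_cost_to_bank(spend, share_of_loss_borne_by_bank):
        baseline_fraud = 10_000_000                          # annual fraud with no protection
        fraud = baseline_fraud / (1 + spend / 1_000_000)     # assumed: spending cuts fraud
        return spend + share_of_loss_borne_by_bank * fraud

    for share in (1.0, 0.2):   # bank bears all the loss vs. shifting 80% to customers
        best = min(range(0, 20_000_001, 500_000),
                   key=lambda s: expected_cost_to_bank(s, share))
        print(f"Loss share {share:.0%}: cost-minimising security spend = ${best:,}")

When the bank bears the whole loss it spends several times as much on
protection as when it can push most of the loss onto its customers.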
Things also go wrong within organisations when tensions arise between
the interests of staff and those of their employers. This leads, for
example, to risk reduction measures being turned into due diligence
routines: `so long as I do X, Y and Z, I won't get fired'. There are
also adverse selection effects; a notorious example is that
individuals who are particularly risk-averse often seek jobs in the
public sector. The result, as economists have long known, is that most
organizations (and especially public sector ones) are excessively
cautious; they take many fewer risks than a rational economic agent
would in similar circumstances. There's a nice article on this by John Adams, and
an analysis of how the `tightening' of security in the USA following
the TWA crash a few years ago probably cost about 60 lives a year by
prompting airline passengers to drive instead of fly.
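The shape of that calculation is simple enough; the rates below are
rough orders of magnitude chosen for illustration, not the figures used
in the cited analysis:

    # Illustrative comparison of fatality risk when passengers switch from
    # flying to driving. Rates and volumes are rough assumptions only.
    deaths_per_billion_miles_driving = 10.0   # roughly the right order for US roads
    deaths_per_billion_miles_flying  = 0.2    # scheduled airlines are far safer per mile

    diverted_passenger_miles = 10e9           # assume 10 billion passenger-miles shift to the road

    extra_deaths = diverted_passenger_miles / 1e9 * (
        deaths_per_billion_miles_driving - deaths_per_billion_miles_flying)
    print(f"Extra deaths per year from the modal shift: about {extra_deaths:.0f}")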
Although it might seem strange for a security engineer to say this,
our societies spend far too much on `security'. Increasing this still
further in knee-jerk response to terrorist incidents will hinder
economic growth - and is letting the other side dictate the game. The
response to terrorist incidents must include much better risk
management: better information on the threats and on what the
available countermeasures can actually achieve, getting the
mathematics of the risk models right, diverting protection resources
from show to substance and from due diligence to risk reduction, and -
finally - educating the press and public about what can realistically
be done, and what can't.
The USA may well not find a simple military solution to the Islamic
fundamentalism problem, any more than Spain could find a lasting
military solution to the Barbary pirate problem in 1504. But, as an
academic, I do believe we know in principle how to fix those secondary
problems that arise from ignorance. That's after all why I wrote my
book.
Although there are decent books on some security tools, such as
burglar alarms, access controls and cryptography, there has been
almost nothing on how to use them in real systems. But most security
systems don't fail because the protection mechanisms were weak, but
because the designers protected the wrong thing, or protected the
right thing in the wrong way. If an extra one or two percent of gross
world product is going to be spent on security engineering over the
next few years, the scope for waste is absolutely colossal. Reading my
book can help you avoid some of it.
`Security
Engineering - a Guide to Building Dependable Distributed Systems'
gives a fairly detailed tutorial on a number of security applications,
such as automatic teller machines, burglar alarms, copyright
protection mechanisms and electronic warfare systems. It uses these to
introduce a wide range of security technologies, such as biometrics,
tamper-resistant electronics and authentication protocols. This
material is then used to bring out the system-level engineering
issues, such as false alarm rates, protection versus resilience,
naming, security usability, reliability, and assurance.
Although the book grew out of notes for security courses I teach
at Cambridge, I've rewritten the material to ensure it's accessible to
the working professional. The people on whom I tested the manuscript
included not just security geeks but doctors, lawyers, accountants,
retired soldiers and airmen, an economics professor, a software
company CEO, and even my mum (a retired pharmacist).
Finally, here is the book's
home page, which has ordering information.