Security Engineering - Part 2

This page contains new material and links for part 2 of my book `Security Engineering'. The fourteen chapters there cover a range of security technologies from multilevel secure systems through alarms and tamper-resistance to emission security and copyright management mechanisms. For reference, here are the table of contents and the bibliography. The errata for the print version are at the end.

New material and links

Privacy: In chapter 8, I talk about privacy risks, and in particular the fact that most privacy compromises result from the abuse of authorised access by insiders. This continues to be a growing issue. A recent example is a scare in the UK about tax clerks looking up the files of celebrities, and even using their systems to harass ex-spouses.

This is a hard problem, and not one that was created by computers. In the old days, if you kept your bank account at the Oxford branch, then only the Oxford staff had easy access to it. This made it harder for a London-based private eye to get your account details via his source in the bank's Kensington branch, but more or less ensured that there would be occasional abuse by friends (and enemies) of customers who worked at the bank. This was controlled by cultural means - old-fashioned banks had strong value systems and imbued their staff with a strong sense of what was and what was not `done'. A modern system in which two thousand minimum-wage call-centre staff in Newcastle deal with enquiries from Oxford (and everywhere else) makes local abuse less likely (except for customers who live in Newcastle), but makes it much more likely that there will be someone selling information to private detectives.

Technical solutions I describe include the design of compartmented mode systems, where operational staff have access to only a small part of the whole database, and inference control, which you can use to ensure that analysts have access only to statistical information. A radical, non-technical solution is to farm out the work overseas. For example, some US medical records are transcribed from doctors' dictaphones to case files by clerks in India.
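One classic inference-control rule is query set size restriction: a statistical query is answered only if it matches enough records, so that no answer pins down an individual. Here is a minimal sketch in Python (my own illustration, with made-up data and a made-up threshold, not code from the book):

```python
# Illustration only: query set size restriction, a basic inference-control
# rule. A statistical query is answered only if it matches at least
# THRESHOLD records; otherwise the answer could identify an individual.

THRESHOLD = 3

RECORDS = [
    {"town": "Oxford", "balance": 1200},
    {"town": "Oxford", "balance": 800},
    {"town": "Oxford", "balance": 4500},
    {"town": "Newcastle", "balance": 300},
]

def mean_balance(predicate):
    """Return the mean balance over matching records, or None to refuse."""
    matches = [r["balance"] for r in RECORDS if predicate(r)]
    if len(matches) < THRESHOLD:
        return None  # refuse: query set too small
    return sum(matches) / len(matches)

print(mean_balance(lambda r: r["town"] == "Oxford"))     # answered
print(mean_balance(lambda r: r["town"] == "Newcastle"))  # refused -> None
```

On its own this rule is weak - an analyst can often combine several permitted queries (a `tracker') to reconstruct a forbidden one - which is why inference control in practice needs further measures such as query auditing or noise addition.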

Nuclear Command and Control: More has emerged about command and control since my book went to press. According to US doctrine, the detonation of a nuclear weapon requires three things: authorisation, environment and intent. The authorisation will typically be a code of the type discussed in chapter 12; `environment' might mean sensing a unique environmental condition, such as the zero gravity following the release of an air-drop bomb or the 20,000 g experienced by an artillery shell when it is fired; and `intent' means a command from the officer in charge of the launch. This too is heavily coded to ensure that background noise won't either prevent a weapon from being armed deliberately or allow it to be armed by accident. That at least is the theory. As for the practice, it seems that for many years the USA used the authorisation code of all zeros for convenience; while other countries, such as Pakistan, have had a very laid-back approach that some commentators consider to be dangerously irresponsible.
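The three-condition rule can be reduced to a toy model: the weapon arms only when authorisation, environment and intent all check out. The sketch below (my own illustration - the code format, threshold and function names are invented, and real arming logic is of course vastly more involved) uses the artillery-shell case, where `environment' is the acceleration sensed at firing:

```python
# Toy model only: a weapon arms only when all three conditions hold -
# a valid authorisation code, the expected unique environmental signal,
# and a valid coded intent command. All values here are hypothetical.

EXPECTED_AUTH = "4F7A-9C21"   # hypothetical authorisation code
FIRING_G = 20000.0            # artillery shell senses ~20,000 g when fired

def may_arm(auth_code: str, sensed_g: float, intent_code_valid: bool) -> bool:
    authorised = auth_code == EXPECTED_AUTH
    environment = sensed_g >= FIRING_G   # unique environmental condition
    return authorised and environment and intent_code_valid

print(may_arm("4F7A-9C21", 20500.0, True))   # -> True: all three present
print(may_arm("4F7A-9C21", 1.0, True))       # -> False: wrong environment
print(may_arm("0000-0000", 20500.0, True))   # -> False: bad authorisation
```

The point of conjoining three independent signals is that no single failure - a leaked code, a sensor fault, or a garbled command - suffices to arm the weapon.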

Hardware Security: In chapter 14, we talk about a number of ways of probing out information from smartcards and secure microcontrollers. The state of the art has advanced quite a lot since the book went to press, mostly through the development of optical probing and optical fault induction techniques.

Meanwhile, outside the attack labs, the main hardware security threat remains the fact that people throw away hard disks containing all sorts of sensitive information. An interesting study has been published by Simson Garfinkel, who collected 158 used drives, found that 129 of them worked, and extracted significant personal information from 49 of them. One even had a year's worth of transactions with account numbers from an ATM in Illinois. Peter Gutmann has a good paper on this; as disk drives become `smarter', the only way to be sure that data have been deleted from a device is to physically destroy it.
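The naive software remedy is to overwrite the data before discarding the drive; a minimal sketch in Python follows (my own illustration, not a recommended tool). It shows exactly why the text's conclusion holds: the overwrite only reaches sectors the operating system can still address, while remapped bad sectors and wear-levelled flash may keep old copies forever.

```python
# Illustration only: overwrite a file in place with random bytes.
# As noted above, this gives NO guarantee on modern ('smart') drives:
# bad-sector remapping and wear levelling can leave old data in places
# the operating system can no longer address.

import os

def overwrite_file(path: str, passes: int = 1) -> None:
    """Overwrite every addressable byte of the file with random data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the pass to the drive

# usage sketch:
# overwrite_file("statement.pdf")
# os.remove("statement.pdf")
```

For data that must stay deleted, the reliable options remain physical destruction or never writing the data to the medium in cleartext in the first place.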

Intelink: Chapter 16 refers to secure intranets. There is a book, `Top Secret Intranet: How U.S. Intelligence Built INTELINK - The World's Largest, Most Secure Network', by Frederic Thomas Martin. It also mentions Venona, for which (as I described in the page for section 1), there is now a home page and at least one book.

Furby: I reported on p 310 that the NSA banned Furby toys in the belief that they could remember and randomly repeat things said in their presence. The commander at the Norfolk Naval Shipyard ordered anyone seeing a Furby to seize it and report its owner for a security violation. But subsequently the toy's maker claimed that it had been libelled; that it contained no recording device and that its utterances, although random, were preprogrammed. When even the NSA can't tell whether a common object contains a recording device, that should give some pause for thought. And as more and more everyday objects acquire both computational and communications capability, where will it end? I have been asked by government contractors if we can think of any way, short of physical search, of determining whether an individual passing through a door is carrying some kind of recording equipment; we've not come up with any ideas that stood scrutiny.

t00lz: On page 369 I mentioned as a source of exploit tools. This site appears to be on the wane; try sites like or


Thanks to Peter Chambers, Nick Drage, Ben Dougall, Paul Gillingwater, David Håsäther, Konstantin Hyppönen, Garry McKay, Avi Rubin, Nick Volenec, Randall Walker, Keith Willis, Stuart Wray and Stefek Zaba.

Return to Ross Anderson's home page