
Security Engineering 2 - Notes

This page accumulates further notes on the second edition of my book Security Engineering - A Guide to Building Dependable Distributed Systems. As further material comes along that could be useful to students studying from my book, or to engineers using it as a reference, I link to it here.

Chapter 2: I asked at p 28 whether the common bank strategy of asking users to parse URLs as a precaution against phishing discriminates against women. Tyler Moore and I did the experiment; the answer is yes. I presented this at SOUPS 2008 in my keynote talk; here are the slides. It was also discussed at SHB.

Chapter 3: There is a cool paper by Srdjan Capkun and his students on how to do relay attacks on remote keyless entry devices; this fills out a lot of the details of how car locks work. Earlier, the Keeloq device widely used in remote keyless entry for cars was cryptanalysed: see the scientific paper and the press coverage. The mechanisms used in the Mifare Classic card have been broken too; here's a paper on how to attack them, and the story of how the card vendor tried to suppress publication using the courts.

Chapter 5: The vulnerability in MD5 became a lot less theoretical at the end of 2008 when David Molnar and colleagues demonstrated at the Chaos Communication Congress how to create a rogue CA that's accepted as genuine by both IE and Firefox, and thus allows arbitrary middleperson attacks on SSL/TLS. And the problems with random number generators aren't limited to crypto toolkits: here is the story of how Microsoft screwed up royally with the software it wrote to give users a free choice of browser in response to a competition-law judgement.
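The reported flaw in that browser-choice screen was a shuffle implemented by sorting with a comparator that returned a random result; sorting algorithms assume a consistent ordering, so this does not produce a uniform permutation, and the resulting browser order was biased. Here is a minimal sketch in Python illustrating the effect (the comparator-based shuffle is an illustration of the reported bug, not Microsoft's actual code):

```python
import random
from collections import Counter
from functools import cmp_to_key

browsers = ["IE", "Firefox", "Chrome", "Opera", "Safari"]

def random_comparator(a, b):
    # Returning -1 or 1 at random violates the contract that
    # comparisons be consistent, so sorting with this comparator
    # does not yield a uniform shuffle.
    return random.choice([-1, 1])

def biased_shuffle(items):
    return sorted(items, key=cmp_to_key(random_comparator))

def fair_shuffle(items):
    items = list(items)
    random.shuffle(items)  # Fisher-Yates: uniform over permutations
    return items

# Count how often each browser lands in first place.
trials = 100_000
biased = Counter(biased_shuffle(browsers)[0] for _ in range(trials))
fair = Counter(fair_shuffle(browsers)[0] for _ in range(trials))
print("biased:", {b: round(biased[b] / trials, 3) for b in browsers})
print("fair:  ", {b: round(fair[b] / trials, 3) for b in browsers})
```

With the biased shuffle, the first-place frequencies deviate markedly from the 0.2 each that fairness requires; with Fisher-Yates they converge to it.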

Chapter 6: See Peter Swire's ID divide paper for an analysis of naming problems in the context of the US Real ID Act, and Andrew Adams' What's Yours is Mine and What's Mine's My Own for a discussion of complex principals such as joint accounts, agents, trustees and other issues of real-world shared control and delegation. A cunning fraud shows what can go wrong when you don't manage deduplication properly. And the issues of social justice around name changes by trans people and abuse survivors are discussed here.

Chapter 8: a Jason Report from the Mitre Corporation tells the history of the modern system of information classification since WW2, and discusses its shortcomings; this report has influenced recent technical research. (For the history up till 1940, see here and in particular here.) On the policy front, an article in the Washington Post describes the proliferation of employment in US intelligence agencies since 9/11. By 2010, over 854,000 people held Top Secret clearances; being scattered over dozens of agencies that don't communicate much with each other, they often duplicate each other's work, and the volume of their output was such as to overwhelm the small number of people at the White House who're cleared to see all of it. The biggest outcome was the Snowden affair; the backlash after that included a false accusation against a senior US diplomat which damaged relations with Pakistan, and perhaps also the aggressive behaviour of the FBI towards Hillary Clinton over her email server.

The recently declassified NSA history, American Cryptology During the Cold War 1945-1989 gives many insights into the difficulties of running compartmentation systems while maintaining operational effectiveness. Volume 2, p 275ff, gives another account of the origins of the modern classification system. There is also a fascinating piece by Daniel Ellsberg on his experience in dealing with classified data in the Nixon administration: once people get high-level clearances they typically ignore the opinions of uncleared people for a while, asking themselves "would he be saying this if he had access to the stuff the President and I have access to?" It typically takes an official three years to realise how foolish this is – and many of a President's staff don't last much longer than that.

The UK government used to keep its rules for handling classified data classified. After a scandal in which Her Majesty's Revenue and Customs lost 25 million people's records, and it turned out that people who should have read and understood the Manual of Protective Security weren't allowed to, this changed: you can now download and read the HMG Security Policy Framework. There is also a leaked copy of the Defence Manual of Security which gives the full monty.

Chapter 9: A Judgment of the European Court of Human Rights found against Finland for not providing sufficient compartmentation in its medical record systems to protect the privacy of hospital staff against the curiosity of colleagues. This may have significant implications for public-sector systems: see our report, Database State. See also our Nuffield report on what happens to health privacy in a world of cloud-based medical records and pervasive genomics.

An influential recent idea is Cynthia Dwork's differential privacy, which provides a provable framework for active disclosure control; her book has a detailed exposition of the field. Paul Ohm's paper Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization has put inference control centre-stage in the policy debate; things that we engineers have known for thirty years have suddenly been forcefully brought to the attention of lawyers (Paul is a well-known law professor). The consequences could be wide-ranging. For example, the UK Office for National Statistics has published a number of papers about how they plan to deal with statistical disclosure control. That this is a hard problem is also well illustrated by an amusing NSA error.
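To give a flavour of the idea: a mechanism is ε-differentially private if adding or removing any one individual's record changes the probability of any output by a factor of at most e^ε. The standard construction answers a numeric query by adding Laplace noise scaled to the query's sensitivity. Here is a minimal sketch in Python, with an illustrative dataset and query of my own rather than anything from Dwork's book:

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via the inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one
    # person changes the count by at most 1, so Laplace noise of
    # scale 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative use: a noisy count of patients over 60.
ages = [34, 61, 72, 45, 58, 66, 70, 29, 63, 80]
print(private_count(ages, lambda a: a > 60, epsilon=0.5))
```

The trade-off is explicit: a smaller ε means stronger privacy but noisier answers, which is exactly the kind of quantifiable disclosure control the ONS papers grapple with.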

Chapter 10: We have an award-winning paper showing a serious protocol failure in EMV. The protocol security issue I talked about in the book, namely SDA versus DDA, turns out not to matter much. With DDA, when the card is first inserted into the terminal, it sends a nonce NC to the terminal along with its certificates. The terminal sends a nonce NT plus (in some implementations) CVMR, the Card Verification Method Result (which if present can be compared to the CVR and used to detect our attack). The card then signs NC, NT and CVMR if present. The transaction then proceeds just as with SDA, with the PIN being sent to the card in the clear! We understand this is because hundreds of thousands of terminals would otherwise have had to be upgraded. There is also a command at startup, "get processing options", the answer to which specifies whether SDA or DDA is to be used; this may permit an attacker to force a transaction to fall back from DDA into SDA. All in all, it's unclear why the banks bother moving to DDA at all. It looks like a compromise between SDA and the more robust but more expensive CDA. Overall, though, EMV is over-complex with too many options, too many combinations of which are insecure. It badly needs an overhaul, and the line we're taking is that the US banking industry should not be allowed to roll out EMV until the bugs are fixed.
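To make the flow concrete, here is a minimal sketch of the DDA exchange described above (the field names and the signature primitive are illustrative stand-ins, not the actual EMV encodings). The point is that when the terminal omits CVMR, nothing in the signed data binds the cardholder-verification result, so a no-PIN attack goes undetected:

```python
import hashlib
import hmac
import os

CARD_KEY = os.urandom(16)  # illustrative stand-in for the card's private key

def card_sign(*fields):
    # Stand-in for the card's dynamic signature in DDA.
    return hmac.new(CARD_KEY, b"|".join(fields), hashlib.sha256).digest()

def dda_exchange(terminal_sends_cvmr):
    nc = os.urandom(8)   # card nonce NC, sent with the card's certificates
    nt = os.urandom(8)   # terminal nonce NT
    cvmr = b"PIN_VERIFIED_OK" if terminal_sends_cvmr else b""
    # The card signs NC, NT and (if present) CVMR.
    signature = card_sign(nc, nt, cvmr)
    # Only when CVMR is included can the issuer later compare it with
    # the card's own CVR and so detect a wedge attack on PIN verification.
    return signature, cvmr

sig, cvmr = dda_exchange(terminal_sends_cvmr=False)
print("CVM result bound into the card's signature:", bool(cvmr))
```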

Chapter 11: a record-breaking burglary in London may have relied on a fire that cut power and internet access to a large part of London. A large theft from a museum in Paris seems to have involved an alarm system that broke (or was sabotaged) some time previously. However, thefts from museums mostly involve insiders; see the Journal of Physical Security v 4 no 1. There's also an interesting philosophical issue about unauthorised acquisitions: if a disruptive artist like Banksy leaves one of his paintings on the wall of your museum and you don't notice it for a while, is that a security failure? There's much more on physical security here.

Chapter 13: Here are photographs of a PAL, a PAL reprogrammer and a PAL verifier, taken by Steven J. Greenwald at the National Museum of Nuclear Science and History, Albuquerque. There's also an article on China's nuclear weapon security.

Herbert L Abrams' Human Reliability and Safety in the Handling of Nuclear Weapons reports three murders committed by cleared personnel, and the organisational response. One in twelve Navy staff are dismissed during their first enlistment for psychological problems; 3.8% of nuke sub crews needed to see a shrink. Of 96,000 sacked, 23,000 were for alcohol dependence, 14,000 for drug use, 8,000 for personality disorders, and 6,000 each for schizophrenia and other psychoses. Since 1975, drug use has fallen from 38% to 18% while alcohol has gone up from 3% to 18%. He concludes that there are over 1,900 unstable individuals in the nuclear forces at any one time (and notes that neither submarine-borne nor carrier-borne nukes have PALs).

There are also some useful papers from Sandia on the unique signal mechanisms used to assure detonation safety in nukes.

Chapter 14: There are two fascinating papers by Hans de Heij of the Dutch central bank on the security usability of banknote design: part 1 and (especially) part 2. There's also an excellent chapter on the History of Document Security by Karel Johann Schell in a recent collection of papers on The History of Information Security.

Chapter 15: Galton's original paper on the uniqueness of fingerprints is here, and there's a history of biometric technology by James Wayman in The History of Information Security. Work on the UK ID card scheme has thrown up an exception handling problem: if you identify people based on fingerprints, how do you deal with the millions of over-75s from whom good fingerprints are often hard to obtain? Finally, on Jan 1 2009 we had our first report of a real attack, on Japan's immigration fingerprint reader system.

There is also recent disquiet about DNA evidence. Police experts have been telling courts for years that the odds against a mismatch are millions or even billions to one. This now turns out to be misleading; in fact, in the Arizona database alone, with a mere 65,000 felons, there are 122 pairs of samples that match at 9 out of 13 loci (the standard test) and twenty pairs that match at 10 loci. It's down to our old friend the birthday paradox. Matching a single sample against the whole Arizona database should have a false positive rate of 1 in 279. A good summary of the problems with DNA evidence can be found here.
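The arithmetic is worth spelling out, since the birthday paradox does all the work. If the per-pair probability of a 9-locus match is roughly 1 in 18 million (my assumption, chosen because it is consistent with both figures quoted above), then one sample searched against 65,000 others matches with probability about 65,000/18,000,000, i.e. about 1 in 277, while the database searched against itself involves over two billion pairs. A quick check in Python:

```python
from math import comb

P_MATCH = 1 / 18_000_000  # assumed per-pair probability of a 9-locus match
N = 65_000                # size of the Arizona felon database

# One sample searched against the whole database:
print(f"single-sample false positive rate: about 1 in {1 / (N * P_MATCH):.0f}")

# Every sample compared against every other (the birthday paradox):
pairs = comb(N, 2)
print(f"pairwise comparisons: {pairs:,}")
print(f"expected 9-locus matching pairs: about {pairs * P_MATCH:.0f}")
```

This predicts about 117 matching pairs, close to the 122 actually found; the lesson is that a "1 in 18 million" match probability is no reassurance at all once you compare everyone with everyone.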

Chapter 17: A significant addition to the history of emsec was declassified on Christmas Eve 2008: A History of U.S. Communications Security (Volumes I and II); the David G. Boak Lectures (NSA, 1973). This tells the tale of how the USA discovered the Tempest threat during World War 2, then again in 1951, then again in 1962 ... and kept on finding the available solutions "too hard". (The emsec history section starts from p 85 of the pdf - p 89 of the original.) Another version appeared in 2010 on Cryptome and yet another in American Cryptology During the Cold War 1945-1989, Vol 2, from p 221. The invention of low-probability-of-intercept communication is described in the same book, Vol 2, from p 187.

Chapter 20: Colleagues and I acquired the ability, in early 2009, to set caller ID on our university phones to anything we like. We are on our honour not to do anything naughty with this feature!

Chapter 21: See the notes of Mikko Hypponen's course for more on malware, and RFC 3234 for more on what goes wrong with middleboxes like firewalls, tunnel endpoints, NAT and wiretap equipment. For a detailed description of how social engineering is used to get people to install malware, see The snooping dragon which explains how the Chinese spooks compromised computers at the Dalai Lama's private office in the run-up to the Peking Olympics.

Chapter 24: Recently declassified NSA papers with relevance to crypto history and traffic analysis are here; older material here covers an even wider range of topics, from deception to Tempest and electronic warfare.

Chapter 25: There's a nice article in the Economist about why security management is hard. They're talking about why it's hard to run a risk-management department in a bank, but the lessons pretty well all go across to our field.

Chapter 26: Nancy Leveson's book System Safety Engineering: Back To The Future is available online and well worth a look: it teaches lessons from the inadequacy of 'blame and train' to the dangers of a high false alarm rate. See also some experiments at Berkeley which showed that maliciously inserted vulnerabilities were extremely hard for even capable and motivated reviewers to find; design for verifiability is still an unsolved problem.

For an example of a truly awful failure of a Common-Criteria evaluated product, see here. Evaluations below about EAL6 aren't worth much, while those at that level are too hard to get – or so restricted as to be meaningless.

Return to the book's home page.