Security Group
2004 seminars
13 December 17:15 National Security on the Line: Electronic Communications in an Age of Terror / Susan Landau, Sun Microsystems Laboratories
Lecture Theatre 2, William Gates Building
Wiretaps have been an element of U.S. law-enforcement and foreign-intelligence investigations for over a quarter century. During this period, communications technology has substantially changed. Law enforcement has sought to keep laws current with the new technology. But new technology brings new threats and it is not clear that the FBI's latest efforts to extend the Communications Assistance for Law Enforcement Act (CALEA) to Voice over IP would actually improve the total security equation. In this talk, we discuss national-security and law-enforcement wiretapping and the Internet, and what security means in this context.
23 November 16:15 Questioning the Usefulness of Identity-based Key Cryptography / Yvo Desmedt, UCL
Since Boneh and Franklin's 2001 paper "Identity based encryption from the Weil pairing," research on identity-based cryptography and on applying bilinear maps to cryptography has flourished. Shamir, in 1984, proposed the idea of "identity-based" cryptography to avoid a Public Key Infrastructure. Instead of each user having their own public key, the user's identity is the "public key," and a trusted center provides each party with a secret key.
We critically analyze whether Shamir's identity-based concept allows us to avoid a public key infrastructure. We argue the need for at least a registration infrastructure, which we call a "basic Identity-based Key Infrastructure." Moreover, we demonstrate that, if users' secret keys can be stolen or lost, the infrastructure required to deal with this is as complex as that of a PKI. Our discussion extends to the case in which the traditional PKI is replaced by an on-line PKI, as introduced by Rivest (1998).
We conclude by surveying possible useful applications of identity-based cryptography. Note: no number theory will be used in this lecture.
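To make the registration and escrow issues concrete, the following minimal sketch (not Boneh and Franklin's pairing-based scheme, and not part of the talk) illustrates the key-issuance structure Shamir proposed: a trusted centre derives each party's secret key from a master secret and the party's identity string, so the identity itself plays the role of the public key. The class and identity names below are invented for illustration.

```python
# A toy sketch of Shamir-style identity-based key issuance (NOT Boneh-Franklin
# IBE: real identity-based encryption needs bilinear pairings). It only shows
# the trust structure discussed above: a trusted centre holds a master secret
# and derives each party's secret key from their identity string, so the
# identity itself plays the role of the "public key".

import hmac, hashlib, os


class TrustedCentre:
    def __init__(self):
        self.master_secret = os.urandom(32)   # known only to the centre

    def extract(self, identity: str) -> bytes:
        """Registration step: hand 'identity' its secret key."""
        return hmac.new(self.master_secret, identity.encode(), hashlib.sha256).digest()


if __name__ == "__main__":
    centre = TrustedCentre()
    alice_key = centre.extract("alice@example.com")

    # Note the consequences argued in the talk: the centre must still verify
    # identities at registration time (a registration infrastructure), and it
    # can recompute alice_key at any moment (inherent key escrow), so stolen
    # or lost keys still need revocation machinery comparable to a PKI's.
    assert alice_key == centre.extract("alice@example.com")
    print("issued key for alice:", alice_key.hex())
```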
16 November 16:15 Detection of LSB Matching Steganography in Images / Andrew Ker, Oxford University Computer Laboratory
26 October 16:00 Data remanence in non-volatile semiconductor memories. Part I: Introduction and non-invasive approach / Sergei Skorobogatov, University of Cambridge
Security protection in microcontrollers and smartcards with EEPROM/Flash memories is based on the assumption that information disappears completely from the memory after erasing. Chip manufacturers have been very successful in making their hardware designs robust against all sorts of attacks, but they share a common problem: data remanence in floating-gate transistors. The information stored inside an EEPROM/Flash cell in the form of a charge on the floating gate changes some parameters of the storage transistor, so that even after an erase operation the transistor does not return to its initial state. This allows an attacker to distinguish previously programmed transistors from unprogrammed ones and to restore information from the erased memory. In practice the attack can be carried out in different ways. The cheapest is to measure the parameters of the transistor non-invasively, by observing the voltage- and time-dependent characteristics of each memory cell in the array. Fortunately for security, this only works on a very limited number of chips. However, the fact that the information does not disappear completely after a memory erase forces developers to implement additional protection. This talk summarises the research done in this direction so far and shows how much information can be extracted from some Microchip PIC microcontrollers after their memory has been 'erased'.
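As a purely illustrative sketch of the classification step described above (with invented numbers standing in for real threshold-voltage measurements), the following toy simulation shows how a residual parameter shift in previously programmed cells can let an attacker recover bits from an "erased" array with a simple threshold:

```python
# Illustrative only: synthetic numbers standing in for per-cell threshold-voltage
# measurements of an "erased" EEPROM/Flash array. The point of the sketch is the
# classification step described above: cells that were programmed before the
# erase retain a small parameter shift, so a simple threshold can often separate
# them from cells that were never programmed.

import random

random.seed(0)

def measure_erased_cell(was_programmed: bool) -> float:
    """Hypothetical measurement: erased cells cluster around 1.00 V,
    previously programmed cells retain a ~0.05 V residual shift."""
    base = 1.05 if was_programmed else 1.00
    return base + random.gauss(0, 0.01)

# Ground truth for a small simulated memory array (unknown to the attacker).
truth = [random.random() < 0.5 for _ in range(256)]
readings = [measure_erased_cell(bit) for bit in truth]

# Attacker's side: pick a threshold halfway between the two observed clusters.
threshold = (min(readings) + max(readings)) / 2
guess = [v > threshold for v in readings]

correct = sum(g == t for g, t in zip(guess, truth))
print(f"recovered {correct}/{len(truth)} erased bits correctly")
```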
15 October 16:30 Exploiting the Transients of Adaptation for RoQ Attacks on Internet Resources / Azer Bestavros, Boston University Computer Science Department
Over the past few years, Denial of Service (DoS) attacks have emerged as a serious vulnerability for almost every Internet service. An adversary bent on limiting access to a network resource could simply marshal enough client machines to bring down an Internet service by subjecting it to sustained levels of demand that far exceed its capacity, making that service incapable of adequately responding to legitimate requests. In this talk I will expose a different, but potentially more malignant adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. In particular, I will show that a determined adversary could bleed an adaptive system's capacity or significantly reduce its service quality by subjecting it to an unsuspicious, low-intensity (but well orchestrated and timed) request stream that causes the system to become very inefficient, or unstable. I will give examples of such "Reduction of Quality" (RoQ) attacks on a number of common adaptive components in modern computing and networking systems. RoQ attacks stand in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as recently proposed "shrew" attacks that exploit specific protocol settings. I will present numerical and simulation results, which are validated with observations from real Internet experiments.
This work was done in collaboration with Mina Guirguis and Ibrahim Matta.
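The following toy discrete-time simulation (not the authors' model; the capacity, backoff factor, and burst parameters are invented) illustrates the transient-exploitation idea: an AIMD-style admission controller is repeatedly knocked back by short bursts timed to its recovery, so legitimate goodput collapses even though the attacker's average rate stays far below brute-force DoS levels.

```python
# A toy simulation of a "Reduction of Quality" effect: an adaptive admission
# controller ramps its admitted rate up additively while demand is below
# capacity and cuts it multiplicatively whenever it observes overload.
# An attacker who bursts exactly as the controller recovers keeps knocking it
# back down despite a low average attack rate.

CAPACITY = 100.0          # requests/tick the server can really handle
LEGIT_LOAD = 80.0         # steady legitimate demand

def simulate(burst_size: float, burst_period: int, ticks: int = 1000):
    admitted_rate = CAPACITY
    legit_served = attack_volume = 0.0
    for t in range(ticks):
        attack = burst_size if (burst_period and t % burst_period == 0) else 0.0
        attack_volume += attack
        offered = LEGIT_LOAD + attack
        if offered > CAPACITY:
            admitted_rate = max(admitted_rate * 0.5, 1.0)        # multiplicative backoff
        else:
            admitted_rate = min(admitted_rate + 1.0, CAPACITY)   # additive recovery
        # legitimate requests get whatever share of the admitted rate they need
        legit_served += min(LEGIT_LOAD, admitted_rate)
    return legit_served / ticks, attack_volume / ticks

baseline, _ = simulate(burst_size=0.0, burst_period=0)
under_attack, attack_rate = simulate(burst_size=50.0, burst_period=30)
print(f"legitimate goodput: {baseline:.1f} -> {under_attack:.1f} req/tick "
      f"(attacker average rate only {attack_rate:.1f} req/tick)")
```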
28 September 16:45 AUTODAFÉ: An act of software torture / Martin Vuagnoux, Ecole Polytechnique Fédérale de Lausanne
In his 1950 paper "Computing Machinery and Intelligence", Turing highlighted, for the first time, the risks of bad input validation in software. The problem has not gone away. Buffer overflows, which account for a third of the vulnerabilities discovered in the past decade, are today the best-studied example.
Automatic vulnerability-search tools have led to an explosion in the rate at which such flaws are discovered. One particular technique is fault injection: the insertion of random, atypical data into input files or protocol packets, combined with monitoring for memory violations. Existing tools for this are still rather crude; their success is more a testimony to the high density of flaws in fielded software than the result of good test coverage. This talk presents a new, optimized approach to performing such "fuzzing" tests and will include a demonstration of the "Autodafé" tool that implements it.
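As a rough sketch of the fault-injection idea (not the Autodafé tool itself; the target program and file names below are placeholders), a minimal mutation fuzzer overwrites random bytes of a known-good input, runs the target on the result, and records any input that makes the process die on a signal:

```python
# A minimal mutation fuzzer in the spirit described above. It mutates random
# bytes of a seed input, feeds the result to a target program, and watches for
# crashes signalled by the process dying on a signal (negative return code on
# Unix). TARGET and SEED_FILE are placeholders to adapt.

import random
import subprocess
import sys

TARGET = ["./target_program"]   # hypothetical program under test
SEED_FILE = "seed.input"        # a known-good input file

def mutate(data: bytes, flips: int = 8) -> bytes:
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(iterations: int = 1000) -> None:
    seed = open(SEED_FILE, "rb").read()
    for i in range(iterations):
        case = mutate(seed)
        with open("fuzz.input", "wb") as f:
            f.write(case)
        try:
            proc = subprocess.run(TARGET + ["fuzz.input"],
                                  capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            continue              # hang: also interesting, but skipped here
        if proc.returncode < 0:   # killed by a signal, e.g. SIGSEGV
            crash_name = f"crash_{i}.input"
            with open(crash_name, "wb") as f:
                f.write(case)
            print(f"iteration {i}: crash (signal {-proc.returncode}), "
                  f"saved {crash_name}", file=sys.stderr)

if __name__ == "__main__":
    fuzz()
```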
26 July 16:15 Threats to Privacy from Passive Internet Traffic Monitoring / Brian Levine, University of Massachusetts
With widespread acceptance of the Internet as a public medium for communication and information retrieval, there has been rising concern that the personal privacy of users can be eroded by malicious persons monitoring the network.
A technical solution to maintaining privacy is to provide anonymity. A number of protocols have been proposed for anonymous network communication. We show that there exist attacks based on passive traffic monitoring that degrade the anonymity of all existing protocols. We use this result to place an upper bound on how long existing protocols, including Crowds, Onion Routing, Mix-nets, and DC-Net, can maintain anonymity in the face of the attacks described. This provides an analytical measure by which we can compare the efficacy of all the protocols. Our analytical bounds are supported by tighter results from simulations, and we have made empirical measurements to validate our assumptions. We found that mix-based protocols offer the best tradeoff of performance and security.
In our most recent work, we have looked at attacks that detect signatures of users and webservers which persist over days or weeks. VPNs created by ssh tunnels or secure wireless connections (e.g., WEP) as implemented are not sufficient to block these signatures, even though they provide more protection than the SSL-based connections previously examined for the same problem. We designed an attack and evaluated it with real Internet measurements: given a training period, we found that an attacker could guess exactly which web site (from the training set) a user had visited through an encrypted link almost 40% of the time; 70% of the time the correct answer was among the attacker's top five guesses. (A random guess had less than a 1% chance of success.)
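The following sketch, on made-up packet traces rather than real measurements, illustrates the kind of signature matching described above: each site visit is reduced to a histogram of observed packet sizes, and a fresh trace from the encrypted tunnel is matched to the nearest training profile.

```python
# A sketch of the traffic-signature idea above, with invented numbers rather
# than real packet traces. Each site visit is reduced to a profile of observed
# (encrypted) packet sizes; the attacker trains on labelled traces and then
# matches a fresh trace from the tunnel to the closest training profile.

from collections import Counter
import math

def profile(packet_sizes):
    """Normalised histogram of packet sizes, the per-site 'signature'."""
    counts = Counter(packet_sizes)
    total = sum(counts.values())
    return {size: n / total for size, n in counts.items()}

def distance(p, q):
    """Euclidean distance between two histograms."""
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0) - q.get(k, 0)) ** 2 for k in keys))

def classify(trace, training):
    prof = profile(trace)
    return min(training, key=lambda site: distance(prof, training[site]))

# Hypothetical training traces (packet-size sequences) for three sites.
training = {
    "site-A": profile([1500, 1500, 640, 1500, 320, 640]),
    "site-B": profile([560, 560, 560, 1200, 560, 80]),
    "site-C": profile([1500, 80, 80, 80, 1500, 80]),
}

# A new observation made through the encrypted tunnel.
observed = [1500, 640, 1500, 1500, 320]
print("best guess:", classify(observed, training))
```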
15 July 16:15 Cybersecurity and Its Limitations / Andrew Odlyzko, University of Minnesota Digital Technology Centre
Network security is terrible, and we are constantly threatened with the prospect of imminent doom. Yet such warnings have been common for the last two decades. In spite of that, the situation has not gotten any better. On the other hand, there have not been any great disasters either. To understand this paradox, we need to consider not just the technology, but also the economics, sociology, and psychology of security. Any technology that requires care from millions of people, most very unsophisticated in technical issues, will be limited in its effectiveness by what those people are willing and able to do. The interactions of human society and human nature suggest that security will continue being applied as an afterthought. We will have to put up with the equivalent of baling wire and chewing gum, and to live on the edge of intolerable frustration. However, that is not likely to block development and deployment of information technology, because of the non-technological protection mechanisms in our society.
8 June 16:15 Privacy Protection in Ubiquitous Computing / Alf Zugenmaier, Microsoft Research, Cambridge
4 May 16:15 Ubiquitous Utopia: Evolution, opportunities and security challenges / Chan Yeob Yeun, Toshiba Research Europe, Bristol
I will discuss the evolution of ubiquitous computing. Future ubiquitous communications systems will enable interaction between an increasingly diverse range of devices, both mobile and fixed. This will allow users to construct their own ubiquitous services using a combination of different communications technologies. Dynamic, heterogeneous and distributed networks will create new opportunities, such as the convergence of communications and highly adaptive reconfigurable terminals. They will also bring new challenges. I will discuss the particular problems involved in securing such ubiquitous environments. My goal is to establish a series of requirements for future security architectures, and future directions that might lead towards the ubiquitous utopia.
25 March 16:15 Engineering a distributed hash table / Frans Kaashoek, MIT
Distributed hash tables (DHTs) are a popular approach to building large-scale distributed applications in the research community. They store data with high availability and allow data to be looked up quickly, even when nodes are leaving and joining the system at a high rate. DHTs are also decentralized, requiring no single organization to be in charge of their management. Only a few operational DHTs exist, however, because most research has focused on the design of the lookup protocol used to find data in a DHT. We have found that, given enough network bandwidth, every lookup protocol can be made to work well; the real challenge in designing a distributed hash table is engineering the details. This talk summarizes our experience with engineering the Chord distributed hash table. Joint work with: Frank Dabek, Jinyang Li, Robert Morris, Emil Sit, and Jeremy Stribling.
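A stripped-down, single-process sketch of the core lookup idea behind Chord may help: node and key identifiers are hashed onto one circular space, and a key is stored on the first node whose identifier follows it on the ring. The real system adds finger tables, successor lists, and replication, none of which is modelled here.

```python
# A minimal sketch of consistent hashing with successor lookup, the core idea
# behind Chord. Real Chord routes in O(log N) hops via finger tables and
# tolerates churn; this toy version keeps the whole ring in one sorted list.

import hashlib
import bisect

RING_BITS = 32

def chord_id(name: str) -> int:
    """Map a node name or key onto the identifier circle."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** RING_BITS)

class Ring:
    def __init__(self, node_names):
        # sorted list of (identifier, node name) pairs
        self.nodes = sorted((chord_id(n), n) for n in node_names)

    def successor(self, key: str) -> str:
        """The node responsible for 'key': first node clockwise from its id."""
        kid = chord_id(key)
        ids = [i for i, _ in self.nodes]
        idx = bisect.bisect_left(ids, kid) % len(self.nodes)  # wrap around
        return self.nodes[idx][1]

if __name__ == "__main__":
    ring = Ring([f"node{i}" for i in range(8)])
    for key in ["alice.txt", "bob.txt", "carol.txt"]:
        print(key, "->", ring.successor(key))
```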
18 March 16:15 Why Internet voting is insecure: a case study / Barbara Simons, ACM
The U.S. Department of Defense had been planning to run an Internet-based voting "experiment" called SERVE (Secure Electronic Registration and Voting Experiment) for the 2004 presidential primaries and general election. In order to evaluate the security of SERVE, a group of computer scientists was asked to review the program. On Jan. 21, 2004, four members of the review panel, including the speaker, produced a report, available at www.servesecurityreport.org, that analyzed the security risks of SERVE and called for it to be shut down. On Feb. 3, 2004, the Department of Defense cancelled SERVE.
In this talk I shall discuss the security problems with Internet voting in general and SERVE in particular. If time permits, I'll also discuss some vulnerabilities of other forms of voting such as paperless touch screen machines.
Speaker:
Barbara Simons is a technology policy consultant. She earned her Ph.D. from U.C. Berkeley, and was a computer science researcher at IBM Research, where she worked on compiler optimization, algorithm analysis, and scheduling theory. A former President of the Association for Computing Machinery (ACM), Simons co-chairs the ACM's US Public Policy Committee (USACM). She served on the NSF panel on Internet Voting, the President's Export Council's Subcommittee on Encryption, and the President's Council on the Year 2000 Conversion. She is on several Boards of Directors, including the U.C. Berkeley Engineering Fund and the Electronic Privacy Information Center, as well as the Advisory Board of the Oxford Internet Institute and the Public Interest Registry's .ORG Advisory Council. She has testified before both the U.S. and the California legislatures. She is a Fellow of ACM and the American Association for the Advancement of Science. She received the Alumnus of the Year Award from the Berkeley Computer Science Department, the Norbert Wiener Award from CPSR, the Outstanding Contribution Award from ACM, and the Pioneer Award from EFF.
16 March 16:15 On the anonymity of anonymity systems / Andrei Serjantov, Computer Lab
The speaker will talk about anonymous communication systems and the relatively new field of analysis of their anonymity properties. He will introduce the subject, look at some of the ways of achieving anonymous communications, define the requirements and threat models, and then talk about a few of the methods used in their analysis.
9 March 16:15 Location privacy / Alastair Beresford, Laboratory for Communication Engineering, University of Cambridge
Privacy of personal location information is becoming an increasingly important issue. This talk discusses some of the challenges of providing location privacy whilst at the same time permitting location-based services to function. Most methods of enabling location privacy in the literature use access control; this talk introduces the mix zone model, which takes a different approach, enabling location privacy through anonymisation. A mathematical model is developed to provide a quantitative measure of anonymity, and a method of providing direct feedback to the user is discussed.
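The quantitative measure mentioned above can be illustrated with a small numeric example (the probabilities below are invented): the attacker's uncertainty about which incoming pseudonym maps to which outgoing one in a mix zone can be summarised as the entropy of their probability estimates.

```python
# A small numeric illustration of an entropy-based anonymity measure. When
# several pseudonymous users pass through a mix zone, the attacker assigns
# probabilities to the possible mappings between incoming and outgoing
# pseudonyms; the entropy of that distribution summarises their uncertainty.

import math

def entropy(probabilities):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Attacker's estimated probabilities that one outgoing pseudonym corresponds
# to each of three pseudonyms that entered the mix zone.
uniform = [1/3, 1/3, 1/3]          # zone worked well: nothing learned
skewed  = [0.8, 0.15, 0.05]        # movement patterns leaked information

print(f"anonymity with uniform mapping: {entropy(uniform):.2f} bits")
print(f"anonymity with skewed mapping:  {entropy(skewed):.2f} bits")
```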
17 February 16:15 The traffic analysis of anonymity systems / George Danezis, Computer Lab
In anonymous communications, as in other fields of computer security, the study of attack and the study of defence go hand in hand. It might therefore seem strange that, until recently, the study of "traffic analysis" had not attracted a lot of attention. In this talk, recent quantitative breakthroughs in understanding how traffic analysis is performed are presented. They are used to quantify the cost of attacking generic anonymous communication systems. The focus then shifts towards high-bandwidth, low-latency systems like "onion routing". We show how the features remaining in anonymised streams of traffic can be used to trace them, and present techniques that scale to de-anonymise whole networks.
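A toy version of the stream-tracing idea, on synthetic traffic rather than real traces, matches each input stream to the output stream whose per-window packet counts it correlates with best; real attacks must also cope with delay, padding, and cross-traffic, which are not modelled here.

```python
# Synthetic illustration of timing/volume correlation: each stream is reduced
# to a vector of packet counts per time window, and an input stream is matched
# to the output stream whose count vector it correlates with best.

import random
random.seed(1)

def packet_counts(rate, windows=50):
    """Synthetic per-window packet counts for a stream of a given mean rate."""
    return [max(0, int(random.gauss(rate, rate ** 0.5))) for _ in range(windows)]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Three streams enter the anonymising network; at the exit they reappear
# (relabelled) with small random perturbations in their per-window counts.
inputs = {name: packet_counts(rate) for name, rate in
          [("in-A", 5), ("in-B", 20), ("in-C", 60)]}
outputs = {f"out-{k[-1]}": [max(0, c + random.randint(-2, 2)) for c in v]
           for k, v in inputs.items()}

for in_name, in_counts in inputs.items():
    best = max(outputs, key=lambda o: correlation(in_counts, outputs[o]))
    print(f"{in_name} most correlated with {best}")
```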
10 February 16:15 A monster emerges from the Chrysalis / Mike Bond, Computer Lab
The speaker has spent some time developing Security API attacks, which trick hardware security modules (HSMs) into revealing their secrets by sending unusual sequences of commands to their published APIs. But how hard is it to physically open up the device and "walk in the front door"? This talk describes the speaker's experiences reverse-engineering the 'Luna CA3', a Hardware Security Module manufactured by Chrysalis-ITS and used in Certification Authorities all over the world. The talk begins with an informal recounting of how the reverse-engineering process progressed and the various challenges that arose along the way. It then explains the results: the exploitation of the internal API to defeat manufacturer lock-in, and the identification of weak spots for more serious attacks that may lead to full compromise. It concludes by looking at the lessons learned from a direct attack on an HSM.
3 February 16:15 Extrusion detection / Richard Clayton, Computer Lab
End users are often unaware that their systems have been compromised and are being used to relay bulk unsolicited email (spam). However, automated processing of the email logs recorded on the "smarthost" provided by an ISP for its customers' outgoing email can be used to detect this activity. These logs do not contain any of the content of the email, or even the subject lines. However, the variability and obfuscation of sender and receiver that spammers use to avoid detection at the destination create distinctive patterns at the source, which permit legitimate email traffic to be distinguished from spam. Some relatively simple heuristics result in low numbers of "false positives" despite tuning to ensure few "false negatives".
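A much-simplified sketch of this style of heuristic (run over invented smarthost log records and with arbitrary thresholds, not those used in the talk) flags customer hosts that send from many different envelope addresses to many distinct destination domains:

```python
# A toy extrusion-detection heuristic over invented smarthost log records.
# Only envelope metadata is used (no subject lines or content): a customer
# host sending from highly variable sender addresses to an unusually large
# set of destination domains is flagged for inspection.

from collections import defaultdict

# (customer_ip, envelope_sender, recipient) tuples, as a smarthost log might record.
log = [
    ("10.0.0.5", "alice@home.example", "friend@another.example"),
    ("10.0.0.5", "alice@home.example", "shop@retail.example"),
    ("10.0.0.9", "xk72q@home.example", "a1@dom001.example"),
    ("10.0.0.9", "p03vz@home.example", "b2@dom002.example"),
    ("10.0.0.9", "m9rt1@home.example", "c3@dom003.example"),
    ("10.0.0.9", "qq8sd@home.example", "d4@dom004.example"),
]

senders = defaultdict(set)
domains = defaultdict(set)
for ip, sender, recipient in log:
    senders[ip].add(sender)
    domains[ip].add(recipient.split("@")[1])

# Illustrative thresholds; a production system would tune these against known
# spam runs to keep false positives low without missing real abuse.
for ip in senders:
    if len(senders[ip]) >= 3 and len(domains[ip]) >= 3:
        print(f"{ip}: {len(senders[ip])} sender addresses to "
              f"{len(domains[ip])} domains -- possible compromised relay")
```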