Computer Laboratory

Security Group

All seminars


If you can't find a talk you are looking for on this page, try the old archives.

2014


30 September 14:00Bitcoin as a source of verifiable public randomness / Joseph Bonneau, Center for Information Technology Policy, Princeton

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Many security protocols can be strengthened by a public randomness beacon: a source of randomness which can be sampled by anybody after time t, but is strongly unpredictable to anybody prior to time t. Applications include public lotteries, election auditing, and multiple cryptographic protocols such as cut-and-choose or fair contract signing. Until recently, all proposals for instantiating a beacon either relied on a trusted third party (such as the NIST beacon or random.org) or had difficult-to-evaluate security properties (such as hashing stock market data). In this talk we introduce a new construction for building a beacon based on Bitcoin's block chain. This beacon outputs 64 bits of min-entropy every 10 minutes on average, and we can prove strong financial lower bounds, of at least tens of thousands of dollars, on the cost of manipulating the output. We discuss constructions for building a manipulation-resistant lottery on top of this primitive, which can make attacks even more expensive. Finally, we discuss a number of interesting smart contracts that can be efficiently implemented by extending Bitcoin script to enable sampling the beacon output, including secure multi-party lotteries and self-enforcing non-interactive cut-and-choose.
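The core extraction step can be sketched as follows. This is an illustrative sketch only, not the authors' construction, and `beacon_output` is a hypothetical helper name: re-hash a block's proof-of-work hash and keep a fixed-width window of the result.

```python
import hashlib

def beacon_output(block_hash_hex: str, bits: int = 64) -> int:
    # Re-hash the proof-of-work hash so all of its entropy is mixed into
    # the fixed-width window extracted as the beacon value.
    digest = hashlib.sha256(bytes.fromhex(block_hash_hex)).digest()
    return int.from_bytes(digest, "big") & ((1 << bits) - 1)

# The well-known Bitcoin genesis block hash, used here purely as sample input.
genesis = "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"
print(beacon_output(genesis))
```

Anyone who sees the block can recompute the same value, which is what makes the beacon publicly verifiable; manipulating it requires discarding valid blocks, which is what the financial lower bounds price.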

*Bio:*
Joseph Bonneau is a Postdoctoral Research Fellow at the Center for Information Technology Policy, Princeton. His research interests include passwords and web authentication, Bitcoin and cryptocurrencies, HTTPS, and secure messaging software. He received a PhD from the University of Cambridge under the supervision of Ross Anderson and an MS from Stanford under the supervision of Dan Boneh. He has worked at Google, Yahoo, and Cryptography Research Inc.


23 September 14:00DP5: Privacy-preserving Presence Protocols / Ian Goldberg, University of Waterloo [currently on sabbatical at the University of Cambridge]

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Users of social applications like to be notified when their
friends are online. Typically, this is done by a central server keeping
track of who is online and offline, as well as of the complete friend
graph of users. However, recent NSA revelations have shown that address
book and buddy list information is routinely targeted for mass
interception. Hence, some social service providers, such as activist
organizations, do not want to even possess this information about their
users, lest it be taken or compelled from them.

In this talk, we present DP5, a new suite of privacy-preserving presence
protocols that allow people to determine when their friends are online
(and to establish secure communications with them), without a
centralized provider ever learning who is friends with whom. DP5
accomplishes this using an implementation of private information
retrieval (PIR), which allows clients to retrieve information from
online databases without revealing to the database operators what
information is being requested.
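The flavour of PIR can be shown with the classic two-server XOR scheme (an information-theoretic toy for illustration, not DP5's actual protocol): each server sees only a uniformly random bit vector, yet the client recovers exactly the record it wants.

```python
import secrets

def pir_queries(db_size, index):
    # Each query alone is a uniformly random bit vector, so neither
    # server learns which record the client wants.
    q1 = [secrets.randbelow(2) for _ in range(db_size)]
    q2 = list(q1)
    q2[index] ^= 1  # the two queries differ only at the wanted position
    return q1, q2

def server_answer(db, query):
    # Each server XORs together the records its query vector selects.
    acc = 0
    for record, bit in zip(db, query):
        if bit:
            acc ^= record
    return acc

db = [7, 13, 42, 99]
q1, q2 = pir_queries(len(db), 2)
recovered = server_answer(db, q1) ^ server_answer(db, q2)
print(recovered)  # 42, i.e. db[2]
```

XORing the two answers cancels every record selected by both queries, leaving only the one position where they differ.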

*Bio:*
Ian Goldberg is an Associate Professor of Computer Science and a
University Research Chair at the University of Waterloo, where he is a
founding member of the Cryptography, Security, and Privacy (CrySP)
research group. His research focuses on developing usable and useful
technologies to help Internet users maintain their security and privacy.
He is a Senior Member of the ACM and a winner of the Electronic Frontier
Foundation's Pioneer Award. He is currently on sabbatical as a Visiting
Fellow at Clare Hall, University of Cambridge.


10 September 13:00Micro-Policies: A Framework for Tag-Based Security Monitors / Benjamin C. Pierce, University of Pennsylvania

FW26, Computer Laboratory, William Gates Building

*Abstract:*
Current cybersecurity practice is inadequate to defend against the
threats faced by society. A host of vulnerabilities arise from the
violation of known, but not enforced, safety and security policies,
including both high-level programming models and critical invariants of
low-level programs. Unlike safety-critical physical systems (cars,
airplanes, chemical processing plants), present-day computers lack
supervising safety interlocks to help prevent catastrophic failures.

We argue that a rich and valuable set of low-level micro-policies can
be enforced at the hardware instruction-set level to provide such safety
interlocks with modest performance impact. The enforcement of these
micro-policies provides more secure and robust macro-scale behavior for
computer systems. We describe work originating in the DARPA CRASH/SAFE
project (www.crash-safe.org) to (1) introduce an architecture for
ISA-level micro-policy enforcement; (2) develop a linguistic framework for
formally defining micro-policies; (3) identify and implement a diverse
collection of useful micro-policies; (4) verify, through a combination
of rigorous testing and formal proof, that combinations of hardware and
software handlers correctly implement the desired policies and that the
policies imply specific high-level safety and security properties; and
(5) design a microarchitecture that provides hardware support with low performance
overhead and acceptable resource costs. Thus, emerging hardware
capabilities and advances in formal specification and verification
combine to enable engineering systems with strong security and safety
properties.
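As a toy illustration of the tag-based idea (our sketch, not the CRASH/SAFE architecture): every value carries a metadata tag, and a software policy handler vets each operation before it takes effect.

```python
class Tagged:
    """A value paired with a metadata tag, as a tagged ISA would carry in hardware."""
    def __init__(self, value, tag):
        self.value, self.tag = value, tag

def policy_add(a, b):
    # Taint-tracking micro-policy rule: the result of an operation is
    # tainted if either input was.
    return Tagged(a.value + b.value, a.tag or b.tag)

def policy_jump(target):
    # Micro-policy rule: control flow must never depend on tainted data.
    if target.tag:
        raise RuntimeError("policy violation: jump to tainted address")
    return target.value

x = Tagged(4, tag=False)   # trusted constant
y = Tagged(8, tag=True)    # attacker-influenced input
z = policy_add(x, y)
print(z.value, z.tag)      # 12 True -- taint propagated through the add
```

In the hardware scheme the handler runs only on rule-cache misses, which is where the modest performance impact comes from.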

*Bio:*
Benjamin Pierce is Henry Salvatori Professor of Computer and
Information Science at the University of Pennsylvania and a Fellow of
the ACM. His research interests include programming languages, type
systems, language-based security, computer-assisted formal verification,
differential privacy, and synchronization technologies. He is the author
of the widely used graduate textbooks Types and Programming Languages
and Software Foundations. He has served as co-Editor in Chief of the
Journal of Functional Programming, as Managing Editor for Logical
Methods in Computer Science, and as editorial board member of
Mathematical Structures in Computer Science, Formal Aspects of
Computing, and ACM Transactions on Programming Languages and Systems. He
is also the lead designer of the popular Unison file synchronizer.


09 September 14:00From TLS to secure websites: the HTTPS landmine / Antoine Delignat-Lavaud, Inria Paris, team Prosecco (Programming Securely with Cryptography)

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
TLS, the most ubiquitous cryptographic protocol used on the Internet, has received a lot of recent attention from the academic community, motivated by a string of high-impact attacks. This verification effort has led, on the one hand, to the discovery of a new complex attack against the protocol, and on the other, to a security proof in the computational model based on a reference implementation that supports a wide range of features used in practice.

However, despite these efforts, the security of actual websites remains widely undermined by weaknesses at the interface between the TLS library and applications, or in the application protocol itself. For instance, security events at the transport layer, such as improper termination of the connection, or a change of the peer identity during transitions between sessions of the TLS protocol, are typically ignored or mishandled by the application. Similarly, the TLS library delegates some of the most critical security decisions, such as authorization and session cache management, entirely to the applications. Combined with the complex security characteristics of HTTP, this leads to a range of practical, high-impact attacks against even the most secure and scrutinized websites.

*Bio:*
Antoine Delignat-Lavaud is a PhD student at Inria Paris under the supervision of Karthikeyan Bhargavan in team Prosecco (Programming Securely with Cryptography). While the original topic of his thesis is Web security, his attempts to model the security of websites against strong attackers have led him to spend over a year working on TLS and the PKI with his colleagues from Inria and Microsoft Research.


29 July 15:15Safe Shell Scripting with Capabilities and Contracts / Scott Moore, PhD student, Harvard

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
The Principle of Least Privilege suggests that software should be executed with no more authority than it requires to accomplish its task. Current security tools make it difficult to apply this principle: they either require significant modifications to applications or do not facilitate reasoning about combining untrustworthy components.
We propose Shill, a secure shell scripting language. Shill scripts enable compositional reasoning about security through declarative
security policies that limit the effects of script execution, including the effects of programs invoked by the script. These security policies are a form of documentation for consumers of Shill scripts, and are enforced by the Shill execution environment.
We have implemented a prototype of Shill for FreeBSD. Our evaluation indicates that Shill is a practical and useful system security tool, and can provide fine-grained security guarantees.
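The capability style Shill enforces can be sketched in a few lines of Python (illustrative only; Shill is its own language with contracts enforced by a FreeBSD sandbox, and `ReadCap`/`run_script` are hypothetical names): a script can touch only the resources whose capabilities its contract grants, and nothing else.

```python
import os
import tempfile

class ReadCap:
    # A capability wrapping read-only access to one specific file; a
    # script holding this object has no ambient authority to open others.
    def __init__(self, path):
        self._path = path
    def read(self):
        with open(self._path) as f:
            return f.read()

def run_script(script, capabilities):
    # The "contract": the script receives exactly the listed capabilities.
    return script(*capabilities)

# Demo: grant a script read access to a single temporary file.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
result = run_script(lambda cap: cap.read().upper(), [ReadCap(path)])
print(result)  # HELLO
os.unlink(path)
```

Because the capability list doubles as documentation, a consumer can see the script's worst-case effects without reading its body.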

*Bio:*
Scott Moore is a PhD student in the Programming Languages group at Harvard University. Currently, he is working with Stephen Chong on improving the security of commodity operating systems.
In general, he is interested in programming language techniques and formal methods that help programmers write safe, correct, and understandable software.


03 June 15:00Trust, Religion, and Tribalism: Reflections on the Sociological Data from the Balkans / Gorazd Andrejč, Junior Research Fellow, Woolf Institute, Cambridge

FW26, Computer Laboratory, William Gates Building

*Abstract:*
Recent sociological studies on interethnic and interfaith relations and reconciliation (Kuburic et al. 2006, Wilkes et al. 2013) have highlighted the importance of (mis)trust, encoded in the perceptions of (in)security and of each other among dominant ethnic groups, for reconciliation attempts in Bosnia-Herzegovina, as well as politics in the region. In this talk, I will reflect on these data with the help of some philosophy (Wittgenstein, Onora O’Neill) and a discursive study of different religious and secular narratives and perceptions of each other among Serbs, Bosniaks, Croats and ‘others’ in Bosnia. Examining chosen representations of (each) other in these discourses, I will suggest that they manifest different kinds of trust and mistrust (non-reflective, reflective/conscious, fear-based, dogmatic, idealized, etc.).

*Bio:*
Dr Gorazd Andrejč is a Junior Research Fellow at The Woolf Institute and an Associate Member of St Edmund’s College, Cambridge. His research is in theological and philosophical perspectives of religious language, the nature of belief, as well as interfaith relations and disagreement, especially in the Balkans and Central Europe. Previously, he was an Associate Lecturer teaching Philosophy of Religion in the Department of Theology and Religion at the University of Exeter, where he also completed his PhD in philosophical theology.


13 May 15:00Security, Reliability and Backdoors / Dr Sergei Skorobogatov, Security Group, University of Cambridge Computer Laboratory

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Backdoors present in hardware or embedded firmware are a potential security
threat, yet the reason for their existence is questionable. In this talk,
the implications of backdoors for real systems will be presented at various
levels, from silicon hardware (SoC FPGA), through embedded firmware
(smartcard), to system software (industrial controller). I will show how
backdoors can be found and exploited. The aim of this talk is to raise a
discussion about the influence of backdoors on security and reliability.

*Bio:*
Dr Sergei Skorobogatov is a Senior Research Associate at the University of
Cambridge Computer Laboratory and a member of the Security Group. He received
his PhD in Computer Science from the University of Cambridge Computer
Laboratory in 2005. His research interests include hardware security analysis
of smartcards, microcontrollers, FPGAs and ASICs. He pioneered optical fault
injection attacks in 2001, which prompted a major rethink within the
semiconductor industry about the security protection of chips and forced the
introduction of new evaluation procedures and countermeasures. His latest
research concerns backdoors and Trojans in hardware devices.


06 May 15:00Psychology of malware warnings / David Modic, Cambridge University

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Internet users face large numbers of security warnings, which they mostly ignore. To improve risk communication, warnings must be fewer but better. We report an experiment on whether compliance can be increased by using some of the social-psychological techniques the scammers themselves use, namely appeal to authority, social compliance, concrete threats and vague threats. We also investigated whether users turned off browser malware warnings (or would have, had they known how).


*Bio*:
Dr David Modic, an economic psychologist, is a research associate at the University of Cambridge’s Computer Laboratory. He has been researching social aspects of the Internet (e.g. cybercrime, virtual deviance, intrusions into the virtual body) for the past fifteen years. Lately he has been focusing on Internet fraud and the psychological mechanisms that enable it. More at: http://david.rodbina.org


29 April 15:00Protecting Programs During Resource Retrieval / Professor Trent Jaeger, CSE Department, Pennsylvania State University

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Programs must retrieve many system resources to execute properly, but
there are several classes of vulnerabilities that may befall programs
during resource retrieval. These vulnerabilities are difficult for
programmers to eliminate because their cause is external to the
program: adversaries may control the inputs used to build names,
namespaces used to find the target resources, and the target resources
themselves to trick victim programs to retrieve resources of the
adversaries' choosing. In this talk, I will present a system
mechanism, called the Process Firewall, that protects programs from
vulnerabilities during resource retrieval by introspecting into
running programs to enforce context-specific rules. Our key insight
is that using introspection to prevent such vulnerabilities is safe
because we only aim to protect processes, relying on access control to
confine malicious processes. I will show that the Process Firewall
can prevent many types of vulnerabilities during resource retrieval,
including those involving race conditions. I will also show how to
perform such introspection and enforcement efficiently, incurring much
lower overhead than equivalent program defenses. Finally, I will
describe a conceptual model that describes the conditions for safe
resource retrieval, and outline how to produce enforceable rules from
that model. By following this model, we find that the Process
Firewall mechanism can prevent many vulnerabilities during resource
retrieval without causing false positives.
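One of the resource-retrieval races discussed above can be illustrated with a small sketch (Python, our illustration, not the Process Firewall's kernel mechanism): refuse to follow a symbolic link planted at the final path component, rather than trusting the program to re-check the path.

```python
import os
import tempfile

def safe_open(path):
    # O_NOFOLLOW makes the kernel fail the open if the last component is
    # a symbolic link, blocking the classic symlink-swap race in which an
    # adversary redirects a victim's resource retrieval.
    return os.open(path, os.O_RDONLY | os.O_NOFOLLOW)

workdir = tempfile.mkdtemp()
real = os.path.join(workdir, "data")
link = os.path.join(workdir, "trap")
with open(real, "w") as f:
    f.write("ok")
os.symlink(real, link)

fd = safe_open(real)   # plain file: allowed
os.close(fd)
try:
    safe_open(link)    # symlink at the final component: rejected
except OSError as e:
    print("blocked:", e.errno)
```

The Process Firewall generalizes this kind of rule: it applies context-specific checks from outside the program, so every retrieval site is protected without per-program patches.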

*Bio:*
Trent Jaeger is a Professor in the Computer Science and Engineering
Department at The Pennsylvania State University and the Co-Director of
the Systems and Internet Infrastructure Security Lab. Trent's
research interests include systems security and the application of
programming language techniques to improve security. He has published
over 100 refereed papers on these topics and the book "Operating
Systems Security," which examines the principles behind secure
operating systems designs. Trent has made a variety of contributions
to open source systems security, particularly to the Linux Security
Modules framework, SELinux, integrity measurement in Linux, and the
Xen security architecture. He is currently the Chair of the ACM
Special Interest Group on Security, Audit, and Control (SIGSAC) and
Program Chair of ASIACCS 2014. Trent has an M.S. and a Ph.D. from the
University of Michigan, Ann Arbor in Computer Science and Engineering
in 1993 and 1997, respectively, and spent nine years at IBM Research
prior to joining Penn State.



22 April 15:00Website Fingerprinting / Nikita Borisov, University of Illinois

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Network traffic, even when encrypted, contains patterns such as packet
sizes, counts, and timings, that can be used to infer sensitive
information about its contents. In particular, it is often possible to
infer which website a user is visiting, or which page within a site,
as each site has a distinctive "fingerprint" visible within the
traffic patterns. Website fingerprinting has been applied in a number
of contexts, including secure web browsing, virtual private networks,
and anonymous communications. Our recent work shows that it can even
be used to remotely monitor the activities of a home user connected
with a broadband modem. [1] I will present an overview of website
fingerprinting attacks and defenses, including our work in progress
that promises to simultaneously improve both the privacy and
performance of anonymous web browsing. [2]
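A minimal illustration of the attack's core idea (a toy nearest-neighbour matcher over packet-size histograms; real attacks use richer features and stronger classifiers):

```python
from collections import Counter

def fingerprint(trace):
    # A crude site fingerprint: a histogram of observed packet sizes.
    # Encryption hides payloads but not these sizes and counts.
    return Counter(trace)

def closest_site(observed, known):
    def dist(a, b):
        keys = set(a) | set(b)
        return sum(abs(a[k] - b[k]) for k in keys)
    fp = fingerprint(observed)
    return min(known, key=lambda site: dist(fp, fingerprint(known[site])))

# Hypothetical training traces: lists of packet sizes per site.
known = {"siteA": [1500, 1500, 64, 300], "siteB": [64, 64, 64, 900]}
print(closest_site([1500, 1500, 300, 64], known))  # siteA
```

Defenses work by padding and scheduling packets so that these histograms converge across sites, which is exactly the privacy/performance trade-off the talk addresses.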

*Bio:*
Nikita Borisov is an associate professor at the University of
Illinois at Urbana-Champaign. His research interests are online
privacy and network security. He is the co-designer of the
Off-the-Record (OTR) instant messaging protocol and was responsible
for the first public analysis of 802.11 security. He is also the
recipient of the NSF CAREER award. Prof. Borisov received his Ph.D.
from the University of California, Berkeley in 2005 and a B.Math from
the University of Waterloo in 1998.

References:
[1] http://hatswitch.org/~enikita/papers/rta-pets12.pdf
[2] http://hatswitch.org/~nikita/papers/pnp-poster-ccs13.pdf


18 March 15:00Bitcoin: A Full Employment Act for security engineers? / Joseph Bonneau, Center for Information Technology Policy, Princeton

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
This talk will provide a brief overview of Bitcoin and discuss why it has been a fascinating new area of security research spanning crypto, security economics, game theory, and anonymity. A few case studies will highlight some of the surprising new applications and research findings, as well as discussing why Bitcoin is far more limited in its current version than is commonly assumed.

*Bio:*
Joseph Bonneau is a fellow at the Center For Information Technology Policy, Princeton. He is focused on web security, authentication, and TLS, though his past research has spanned side-channel cryptanalysis, protocol verification, software obfuscation, and privacy in social networks.

He completed his PhD in 2012 with the Security Group of the University of Cambridge Computer Laboratory, supervised by Professor Ross Anderson and funded as a Gates Cambridge Scholar. His PhD thesis formalises the analysis of human-chosen distributions of secrets, specifically passwords and PINs.

His background is in computer science, math, and cryptography, in which he earned his BS and MS from Stanford. He's worked on cryptography and security at Google, Cryptography Research, Inc and as a private consultant.


25 February 15:00Introduction to DNSSEC / Tony Finch, University of Cambridge Computing Service

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
This talk is a quick introduction to DNSSEC, the Domain Name System Security extensions. DNSSEC is interesting because it does more than just add tamper-proofing to the DNS: it is also a new public-key infrastructure.

The talk will describe the security features that DNSSEC adds (and does not add) to the DNS, and how the DNSSEC PKI can support other protocols such as SSL/TLS and SSH.

To be useful, DNSSEC needs to be widely deployed. The talk will demonstrate that switching on DNSSEC can be straightforward, and will mention some of the traps and pitfalls that can catch the unwary.

Talk slides and materials are at
http://www-uxsup.csx.cam.ac.uk/~fanf2/dns/nws42/

*Bio:*
Tony Finch is a system administrator and developer in the University of Cambridge Information Services (until recently known as the Computing Service) where he helps to run the mail and DNS systems. He has contributed to a number of open source projects including Exim, BIND, SpamAssassin, FreeBSD, Apache httpd, and git. He participates in a number of IETF working groups related to mail and DNS, and has contributed draft documents to the DANE working group.

He is mildly notorious for his email address dot@dotat.at, and can be found online at http://dotat.at/ http://fanf.livejournal.com
https://twitter.com/fanf


11 February 15:00On the (in)security of widely-used RFID access control systems / Dr. Flavio D. Garcia, University of Birmingham

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Over the last few years much attention has been paid to the (in)security
of the cryptographic mechanisms used in RFID and contactless smart
cards. Experience has shown that the secrecy of proprietary ciphers does
not contribute to their cryptographic strength. Most notably the Mifare
Classic, which has widespread application in public transport ticketing
(e.g. Oyster) and access control systems, has been thoroughly broken in
the last few years. Other prominent examples include KeeLoq and Hitag2
used in car keys and CryptoRF used in access control and payment systems.

This talk summarizes our own contribution to this field. We will
briefly show some of the weaknesses we found in the Mifare classic. Then
we will show that the security of its higher-end competitors like
Atmel's CryptoRF and HID's iClass – which were proposed as secure
successors of the Mifare Classic – is not (significantly) higher. We will
also cover security issues of the Hitag2 key fob to conclude with a
discussion on responsible disclosure principles.

*Bio:*
Garcia is a faculty member in the University of Birmingham's Security and
Privacy Group, currently employed as a “Birmingham Fellow”. His work
focuses on the design and evaluation of cryptographic primitives and
protocols for small embedded devices like RFID and smart cards. His
research achievements include breakthroughs such as the discovery of
vulnerabilities in Mifare Classic, iClass, CryptoMemory and HiTag2. The
first of these, Mifare Classic, was widely used for electronic payment
(e.g. London Underground) and access control (e.g. Amsterdam Airport).
Garcia showed that the cryptography in the card was fatally flawed.
HiTag2, the cipher most widely used in car key fobs, was also found to
be insecure.

Garcia’s work has been widely recognised as world-leading, including
“Best Paper” awards from the leading IEEE Security & Privacy and USENIX
WOOT conferences, and the 2008 I/O Award from the Dutch research council
for the best paper bringing computer science research to the attention
of the general public. Garcia joined the security group at the
University of Birmingham in February 2013.


04 February 15:00The effect of decentralized behavioral decision making on system-level risk / Kim Kaivanto, Lancaster University

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Certain classes of system-level risk depend partly on decentralized lay decision making. For
instance, an organization’s network security risk depends partly on its employees’ responses
to phishing attacks. On a larger scale, the risk within a financial system depends partly on
households’ responses to mortgage sales pitches. Behavioral economics shows that lay decision
makers typically depart in systematic ways from the normative rationality of Expected Utility
(EU), and instead display heuristics and biases as captured in the more descriptively accurate
Cumulative Prospect Theory (CPT). In turn psychological studies show that successful
deception ploys eschew direct logical argumentation and instead employ peripheral-route
persuasion, manipulation of visceral emotions, urgency, and familiar contextual cues. Signal
Detection Theory (SDT) offers the standard normative solution, formulated as an optimal
cutoff threshold, for distinguishing between good/bad emails or mortgages. In this paper we
extend SDT behaviorally by re-deriving the optimal cutoff threshold under CPT. Furthermore
we incorporate the psychology of deception into determination of SDT’s discriminability
parameter. With the neo-additive probability weighting function, the optimal cutoff threshold
under CPT is rendered unique under well-behaved sampling distributions, tractable in
computation, and transparent in interpretation. The CPT-based cutoff threshold is (i)
independent of loss aversion and (ii) more conservative than the classical SDT cutoff
threshold. Independently of any possible misalignment between individual-level and
system-level misclassification costs, decentralized behavioral decision makers are biased
toward under-detection, and system-level risk is consequently greater than in analyses
assuming normative rationality.
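The classical SDT baseline that the paper generalises can be computed directly for equal-variance Gaussian evidence (this sketch shows only the normative EU cutoff; the paper's CPT version reweights the probabilities and shifts this threshold):

```python
import math

def sdt_cutoff(mu_noise, mu_signal, sigma, prior_signal, cost_fa, cost_miss):
    # Likelihood-ratio test for equal-variance Gaussians: flag an item
    # (e.g. an email as phishing) when the evidence x exceeds this value.
    # beta is the critical likelihood ratio from priors and error costs.
    beta = ((1 - prior_signal) * cost_fa) / (prior_signal * cost_miss)
    midpoint = (mu_noise + mu_signal) / 2
    return midpoint + sigma ** 2 * math.log(beta) / (mu_signal - mu_noise)

# Symmetric priors and costs put the cutoff halfway between the two means.
print(sdt_cutoff(0.0, 1.0, 1.0, 0.5, 1.0, 1.0))  # 0.5
```

Raising the cost of a miss lowers the cutoff (more items flagged); the abstract's point is that the CPT-based cutoff sits above this classical one, biasing behavioral decision makers toward under-detection.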

*Bio:*
Kim's research issues from a core interest in decision making under risk and uncertainty. He works with both normative and descriptive behavioural mathematical models as well as the associated empirical models, and he designs and implements laboratory experiments for testing normative and behavioural hypotheses. Kim's recent projects have addressed questions in the areas of cyber security and financial decision making. Kim is Director of the recently established Lancaster Experimental Economics Laboratory (LExEL) and a member of the LUMS Research Ethics Committee.


21 January 15:00Eavesdropping near field contactless payments: A quantitative analysis / Thomas P. Diakos, University of Surrey

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
We present a quantitative assessment in terms of frame error rates for the
success of an eavesdropping attack on a contactless transaction using easily
concealable antennas and low cost electronics. An inductive loop, similar
in size to those found in mobile devices equipped with NFC capabilities,
was used to emulate an ISO 14443 transmission. For eavesdropping we used an
identical loop antenna as well as a modified shopping trolley. Synchronisation
and frame recovery were implemented in software. As a principal result of
our experiments we present the FER achieved over a range of eavesdropping
distances, up to 1m, at different magnetic field strengths within the range
specified by the ISO 14443 standard.

*Bio:*
Thomas is a PhD candidate at the University of Surrey, looking into the
security and privacy of near field contactless payments. He is currently
investigating how a combination of remote interrogation and eavesdropping
could be used to extract information from contactless devices that could
potentially cause financial or anonymity loss for the victim. Following his
military service, he studied for a BEng in electrical engineering from the
University of Sheffield and an MSc in communications and signal processing
from the University of Bristol.


14 January 15:00Privacy/Proxy/Perfidy – what criminals (and others) put in domain whois / Dr Richard Clayton, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
I've recently completed a major study of the 'whois' contact details for
domain names used in malicious or harmful Internet activities. ICANN
wanted to know whether a significant percentage of these domain
registrations used privacy or proxy services to obscure the perpetrator’s
identity. No surprises in our results: yes!

What was perhaps surprising was that quite a significant percentage of
domains used for lawful and harmless activities ALSO used privacy and
proxy services.

But the real distinction is that when domains are maliciously
registered, contact details are hidden in a range of different ways,
so that 9 out of 10 of these registrants are a priori uncontactable,
whereas the uncontactable rate varies between a quarter and at most
two-thirds for the non-malicious registrations.

This talk discusses how these results were obtained and what their
implications are for the future of the whois system. It also gives some
technical insight into the innovative design of the whois parsing tool that
has enabled some extremely variable reporting formats to be handled, at
substantial scale, in an automated manner.
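The normalization idea behind such a parser can be sketched as follows (a hypothetical alias table and function names, not the actual tool): registries label the same field in many different ways, so variant labels are mapped onto one canonical record.

```python
import re

# Hypothetical alias table: different registries label the same field
# differently; each variant maps to one canonical record key.
ALIASES = {
    "registrant name": "name",
    "registrant": "name",
    "holder": "name",
    "registrant email": "email",
    "e-mail": "email",
}

def parse_whois(text):
    record = {}
    for line in text.splitlines():
        m = re.match(r"\s*([^:]+):\s*(.+)", line)
        if not m:
            continue
        key = m.group(1).strip().lower()
        if key in ALIASES:
            # keep the first value seen for each canonical field
            record.setdefault(ALIASES[key], m.group(2).strip())
    return record

print(parse_whois("Registrant Name: Alice\nE-mail: a@example.org"))
```

Growing the alias table as new formats are encountered is what lets one parser cope, at scale, with the "extremely variable reporting formats" the talk mentions.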

*Bio:*
Richard Clayton came back to Cambridge in 2000 to study for a PhD on
'Anonymity and Traceability in Cyberspace'. Since getting his degree he
has stayed on as an academic PostDoc "because it's more fun than
working". The main focus of his research is on cybercrime, and
particularly on 'phishing'. The ICANN project described in this talk was
done during his recently completed three year collaboration with the
National Physical Laboratory (NPL) on the EPSRC funded project "Internet
Security".

2013


23 December 16:15Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet / Prof. Steven M. Bellovin, Columbia University

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
For years, legal wiretapping was straightforward: the officer doing the intercept connected a tape recorder or the like to a single pair of wires. By the 1990s, though, the changing structure of telecommunications — there was no longer just “Ma Bell” to talk to — and new technologies such as ISDN and cellular telephony made executing a wiretap more complicated for law enforcement. Simple technologies would no longer suffice. In response, Congress passed the Communications Assistance for Law Enforcement Act (CALEA), which mandated a standardized lawful intercept interface on all local phone switches. Technology has continued to progress, and in the face of new forms of communication — Skype, voice chat during multi-player online games, many forms of instant messaging, etc. — law enforcement is again experiencing problems. The FBI has called this “Going Dark”: their loss of access to suspects’ communication. According to news reports, they want changes to the wiretap laws to require a CALEA-like interface in Internet software.

CALEA, though, has its own issues: it is complex software specifically intended to create a security hole — eavesdropping capability — in the already-complex environment of a phone switch. It has unfortunately made wiretapping easier for everyone, not just law enforcement. Congress failed to heed experts’ warnings of the danger posed by this mandated vulnerability, but time has proven the experts right. The so-called “Athens Affair”, where someone used the built-in lawful intercept mechanism to listen to the cell phone calls of high Greek officials, including the Prime Minister, is but one example. In an earlier work, we showed why extending CALEA to the Internet would create very serious problems, including the security problems it has visited on the phone system.

This talk explores the viability and implications of an alternative method for addressing law enforcement's need to access communications: legalized hacking of target devices through existing vulnerabilities in end-user software and platforms.

*Bio:*
Steven M. Bellovin is a professor of computer science at Columbia University, where he does research on networks, security, and especially why the two don't get along, as well as related public policy issues. In his spare professional time, he does some work on the history of cryptography. He joined the faculty in 2005 after many years at Bell Labs and AT&T Labs Research, where he was an AT&T Fellow. He received a BA degree from Columbia University, and an MS and PhD in Computer Science from the University of North Carolina at Chapel Hill. While a graduate student, he helped create Netnews; for this, he and the other perpetrators were given the 1995 Usenix Lifetime Achievement Award (The Flame). Bellovin has served as Chief Technologist of the Federal Trade Commission. He is a member of the National Academy of Engineering and is serving on the Computer Science and Telecommunications Board of the National Academies, the Department of Homeland Security's Science and Technology Advisory Committee, and the Technical Guidelines Development Committee of the Election Assistance Commission; he has also received the 2007 NIST/NSA National Computer Systems Security Award.

Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds a number of patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs; he was also a member of the information technology subcommittee of an NRC study group on science versus terrorism. He was a member of the Internet Architecture Board from 1996-2002; he was co-director of the Security Area of the IETF from 2002 through 2004.

More details may be found at http://www.cs.columbia.edu/~smb/informal-bio.html.

View original page

03 December 16:15Reviewing Cybercrime: Epistemology, Political Economy and Models / Dr Michael McGuire, University of Surrey

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
The recent publication of the UK Home Office’s paper “Cybercrime - a Review of the Evidence” forms a useful departure point for considering the way knowledge around online offending is currently produced and disseminated. As an evidence review, the aim of the paper was to assemble as comprehensive and up-to-date an overview of cybercrime as possible. But recurring issues around the availability and quality of evidence, as well as the kind of evidence considered relevant by the research sponsors, had important effects upon the content of the review. Equally if not more important to its conclusions was the way the construct of ‘cybercrime’ was interpreted and presented within the typology underlying the offending categories. In this paper I set out a background to the research and consider some of the key methodological issues which arose, in particular the balances which had to be struck between available knowledge, political expediency and the kinds of harmful behaviours considered worthy of inclusion within the review. I relate some of these issues to wider problems in the field of cybercrime research and link these problems to the technological fetishism which infects much of the thinking within this field. I conclude by outlining an alternative, more socially based conceptual model which I argue offers a more robust and, in the long term, adaptable framework for the understanding and policing of ICT-enabled crime.


*Bio:*
Dr Michael McGuire is a Senior Lecturer in Criminology at the University of Surrey and has a particular interest in the study of technology and its impacts upon the justice system. His first book, Hypercrime: The New Geometry of Harm (Glasshouse, 2008), critiqued the notion of cybercrime as a way of modelling computer-enabled offending and was runner-up for the 2008 British Society of Criminology Book Prize. His most recent publication - Technology, Crime & Justice: The Question Concerning Technomia (Routledge, 2012) - provides one of the first overviews of the fundamental shifts in crime and the justice system arising from new technologies. His theoretical research is complemented by a range of applied studies in this area, including recent work on the impacts of E-crime upon UK retail for the British Retail Consortium; a study of Organised Digital Crime Groups for BAE/Detica; and a comprehensive evidence review of cybercrime for the Home Office.

View original page

26 November 16:15TESLA: Temporally-enhanced security logic assertions / Jonathan Anderson, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
The security of complex software such as operating system kernels depends on properties that we (currently) cannot prove correct. We can validate some of these properties with assertions and testing, but temporal properties such as access control and locking protocols are beyond the reach of contemporary tools. TESLA is a compiler-based tool that helps programmers describe and understand the temporal behaviour of low-level systems code. Using temporal assertions (inspired by linear temporal logic), developers can specify security properties and validate them at run-time. We have used TESLA to validate OpenSSL API use, to find security-related bugs in the FreeBSD kernel, and to explore complex rendering bugs that were impervious to existing debugging tools.
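TESLA itself is a compiler-based tool for C, but the core idea of validating a temporal property against run-time events can be sketched compactly. The following Python sketch is a toy model, not TESLA's actual API: it checks a simple locking protocol of the kind the abstract mentions, namely that every access occurs while the lock is held.

```python
# Toy model of a run-time temporal assertion (not TESLA's real C API):
# a small automaton consumes a trace of events and records violations
# of the property "every 'access' happens between 'lock' and 'unlock'".

class TemporalAssertion:
    def __init__(self):
        self.held = False
        self.violations = []

    def event(self, name):
        if name == "lock":
            self.held = True
        elif name == "unlock":
            self.held = False
        elif name == "access" and not self.held:
            self.violations.append("access without lock held")

def check(trace):
    """Run the assertion automaton over an event trace; return violations."""
    a = TemporalAssertion()
    for e in trace:
        a.event(e)
    return a.violations

print(check(["lock", "access", "unlock"]))  # []
print(check(["access", "lock", "unlock"]))  # ['access without lock held']
```

The real tool compiles assertions like this into instrumentation inserted alongside the systems code under test, rather than checking an explicit trace after the fact.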

*Bio:*
Jonathan Anderson is a postdoctoral researcher in the security group here at the CL. He works on tools that support application and OS security as part of the CTSRD project. His PhD work (also at Cambridge) explored the intersection of privacy and operating-system concepts in the context of online social networks.

View original page

18 November 14:00SCION: Scalability, Control, and Isolation On Next-Generation Networks / Prof. Adrian Perrig, Department of Computer Science at the Swiss Federal Institute of Technology (ETH)

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
We present the first Internet architecture designed to provide route control, failure isolation, and explicit trust information for end-to-end communications. SCION separates ASes into groups of independent routing sub-planes, called trust domains, which then interconnect to form complete routes. Trust domains provide natural isolation of routing failures and human misconfiguration, give endpoints strong control for both inbound and outbound traffic, provide meaningful and enforceable trust, and enable scalable routing updates with high path freshness. As a result, our architecture provides strong resilience and security properties as an intrinsic consequence of good design principles, avoiding piecemeal add-on protocols as security patches. Meanwhile, SCION only assumes that a few top-tier ISPs in the trust domain are trusted for providing reliable end-to-end communications, thus achieving a small Trusted Computing Base. Both our security analysis and evaluation results show that SCION naturally prevents numerous attacks and provides a high level of resilience, scalability, control, and isolation.

*Bio:*
Adrian Perrig is a Professor of Computer Science at the Department of Computer Science at the Swiss Federal Institute of Technology (ETH) in Zürich, where he leads the network security group. From 2002 to 2012, he was a Professor of Electrical and Computer Engineering, Engineering and Public Policy, and Computer Science (courtesy) at Carnegie Mellon University. He served as the technical director for Carnegie Mellon's Cybersecurity Laboratory (CyLab). He earned his Ph.D. degree in Computer Science from Carnegie Mellon University under the guidance of J. D. Tygar, and spent three years during his Ph.D. degree at the University of California at Berkeley. He received his B.Sc. degree in Computer Engineering from the Swiss Federal Institute of Technology in Lausanne (EPFL). Adrian's research revolves around building secure systems -- in particular security of future Internet architectures.

View original page

12 November 16:15A day in the life of government cybersecurity / Ian Levy

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:* The security of government systems and programmes is often attacked - both over the wire and in the press. During this presentation, we'll go into some of the issues around security in these systems and the unique challenges they bring.

*Bio:* Dr Ian Levy is technical director at CESG, the information assurance arm of GCHQ.

View original page · View slides

05 November 16:15Animals as Mobile Social Users / Tanya Berger-Wolf

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Recent advances in data collection technology, such as GPS and other mobile sensors, high-definition cameras, and UAVs, have given biologists access to high spatial and temporal resolution data about animal populations. Many of the questions biologists are asking while trying to leverage those data are similar to questions being asked about mobile users. Why do animals go here rather than there? How does location influence activity and social interactions? How do social interactions influence activity and movement choices? How are movement decisions made, in a group and individually?

While some of the methodology for answering those questions has been developed for understanding human behavior, animals offer the advantage of visible and trackable interactions and movements, simpler context and rules of behavior, and no privacy issues. I will present examples of the recent developments from the mobile world of animal populations, show some of the methodology we have developed for understanding their mobile social networks, and discuss the challenges for understanding these kinds of data, common to all animals, including humans.

*Bio:*
Dr. Tanya Berger-Wolf is an Associate Professor in the Department of Computer Science at the University of Illinois at Chicago, where she heads the Computational Population Biology Lab. Her research interests are in applications of computational techniques to problems in population biology of plants, animals, and humans, from genetics to social interactions. As a legitimate part of her research she gets to fly in a super-light airplane over a nature preserve in Kenya, taking a hyper-stereo video of zebra populations.
Dr. Berger-Wolf received her Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2002. After spending some time as a postdoctoral fellow working in computational phylogenetics and doing research in computational epidemiology, she returned to Illinois. She has received numerous awards for her research and mentoring, including the US National Science Foundation CAREER Award in 2008 and the UIC Mentor of the Year (2009) and Graduate Mentor (2012) awards.

View original page · View slides/notes

25 October 14:00In-depth crypto attacks: "It always takes two bugs" / Karsten Nohl, Security Research Labs

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Real-world cryptographic systems rarely meet academic expectations, with most systems being shown "insecure" at some point. At the same time, our IT-driven world has not yet fallen apart, suggesting that many protection mechanisms are "secure enough" for how they are employed.

This talk argues that hacks with real-world implications are mostly the result of being able to break security assumptions on multiple design layers. Protection designs that focus on a single security function and neglect complementary layers are hence more prone to compromise.

We look at three widely deployed protection systems from the cell phone, automotive, and smart-card domains and show how technology abuse arises from the combination of best-practice deviations on multiple design layers.

*Bio:*
Karsten Nohl is a cryptographer and security researcher with a degree in Computer Engineering from UVa. Karsten likes to test security assumptions in proprietary systems and typically breaks them.

View original page · View slides/notes

22 October 16:15Psychology of scams / David Modic, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
There are some interesting specifics concerning crime. A lot of thought has been put into perpetrating it and into preventing it. There are pedestrian types of crime: those that require no particular skill or intelligence to commit, where potential victims have no say in the matter and are, so to speak, innocent. There are also other types of crime, where there needs to be interaction between the potential victim and the offender, and where the criminals need to think on their feet or plan carefully. White-collar crime, or fraud as it is also called, often falls into this latter category.

Internet fraud specifically has recently received much attention in the field of social sciences. Some researchers (e.g. Marian Fitzgerald from Oxford) suggest that an overall perception of decline in crime numbers should be attributed to offenders moving online. This is, broadly speaking, an application of classic criminological theory to the phenomenon of cybercrime (i.e. the overall crime incidence rate stays roughly the same over the years, so if specific crime numbers decline, then there is bound to be another type of crime that rises).

If we accept a certain level of victim facilitation in fraud, then the mechanisms that may influence a potential victim become important. This talk shows the impact of several social psychological factors on the level of compliance with Internet scams (i.e. scam compliance). A Scale of Susceptibility to Persuasion was developed, validated and then applied to the phenomenon of scam compliance in two studies. Four reliable factors contributing to susceptibility to persuasion emerged. The Susceptibility to Persuasion scale was then used to predict overall lifetime (study 1) and time-limited (study 2) scam compliance across the three stages of scams (i.e. finding the scam plausible, responding to it, and losing funds to the scam), with lack of self-control emerging as the strongest predictor of compliance across both studies.

*Bio:*
Born in Ljubljana, Slovenia in 1973. Finished high-school for computer
sciences in 1991. Enrolled into University of Ljubljana, Department for
Social Pedagogy in 1993. Received BSc (distinction) in 1999, with GPA of
9.0/10.00. Enrolled into MSc at the University of Ljubljana, Department for
Social Pedagogy in 2001, awarded MSc (distinction) in 2006. Applied for a
research position at the University of Exeter in 2007, was accepted in 2008.
In 2009 became an Exeter Graduate Fellow. HEA certified in 2010. Certified
Transactional Analysis Counsellor (CTAC). PhD in Psychology awarded in 2013
from University of Exeter. Currently a research associate at the Computer
Lab, here, at Cambridge.

My research interests broadly include the psychology of Internet fraud and topics connected to it. These include the psychology of will / self-control, social psychology, the psychology of persuasion, decision-making processes, cyber-criminology, victimology and personality psychology.

The other area I am interested in is psychotherapy from the perspective of the practitioner.

View original page · View slides/notes

15 October 16:15TLS Security - Where Do We Stand? / Kenny Paterson, Royal Holloway

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
TLS is the de facto secure protocol of choice on the Internet. In this talk, I’ll give an overview of the state-of-the-art of TLS security, focusing mostly on the TLS Record Protocol which is responsible for providing the basic secure channel functionality in TLS. I’ll focus on recently-discovered vulnerabilities in the TLS specification and its cryptographic algorithms. These lead to plaintext recovery attacks against TLS-protected traffic. I will reflect on why the deployment of secure cryptography is seemingly so hard, and what the barriers are to adopting better approaches than the current techniques used in TLS.

*Bio:*
Professor Kenny Paterson obtained his BSc (Hons) in 1990 from the University of Glasgow and a PhD from the University of London in 1993, both in Mathematics. He was a Royal Society Fellow at the Swiss Federal Institute of Technology, Zurich, from 1993 to 1994 and a Lloyd's of London Tercentenary Foundation Fellow at the University of London from 1994 to 1996. He joined Hewlett-Packard Laboratories in 1996, becoming project manager in 1999. His technical work there involved him in international standards setting, internal consultancy on a wide range of mathematical and cryptographic subjects, and intellectual property generation. He also continued with more academic activities. In 2001, Kenny re-joined Royal Holloway as a Lecturer, becoming Reader in 2002 and Professor in 2004. He led the ISG's participation in the MoD/DoD-funded International Technology Alliance from 2006 to 2011. In March 2010, Kenny commenced a 5-year research fellowship funded by EPSRC on the topic of "Cryptography: Bridging Theory and Practice". He was Program Chair for Eurocrypt 2011 and serves on the editorial board of the Journal of Cryptology. Kenny's research interests span a wide range of topics in theoretical and applied cryptography, and information security. He has published more than 120 research papers on these topics.

View original page

11 June 16:15PHANTOM: A Parallel Architecture for Practical Oblivious Computation / Martin Maas, UC Berkeley

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*

Offloading computation to an untrusted datacenter can leak confidential information. Adversaries with physical access -- such as a malicious datacenter employee -- can probe the on-board interconnect to extract secret data from a processor. Tamper-proof computing platforms, where all code is executed within a physically sealed processor and all data outside the processor is encrypted, alleviate this problem only partially. The addresses of data accessed in DRAM are still visible in plain text and represent a source of information leakage.

Our goal is to make a processor's memory accesses "oblivious" so that adversaries see a completely obfuscated address trace, and to build an oblivious platform that is practical today. To this end, we present PHANTOM (++), an oblivious memory controller that achieves high performance by aggressively exploiting memory parallelism and employing a carefully designed stall-free architecture. We have built an FPGA-based prototype on the Convey HC-2ex heterogeneous computing platform and solve several challenges in mapping an Oblivious RAM algorithm to FPGAs running at low frequencies without stalling the high bandwidth memory controllers.

(++) Parallel Hardware to make Applications Non-leaking Through Oblivious Memory

*Bio:*

Martin Maas is a second-year PhD student at UC Berkeley, working with Krste Asanović and John Kubiatowicz. His research interests include managed languages, computer architecture and operating systems. Before coming to Berkeley, Martin received his undergraduate degree from the University of Cambridge. He is currently completing an internship with Tim Harris at Oracle Labs.

View original page

24 May 14:00Distributed Electronic Rights in JavaScript / Mark S. Miller, Google

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
Contracts enable mutually suspicious parties to cooperate safely through the exchange of rights. Smart contracts are programs whose behavior enforces the terms of the contract. This paper shows how such contracts can be specified elegantly and executed safely, given an appropriate distributed, secure, persistent, and ubiquitous computational fabric. JavaScript provides the ubiquity but must be significantly extended to deal with the other aspects. The first part of this [talk] is a progress report on our efforts to turn JavaScript into this fabric. To demonstrate the suitability of this design, we describe an escrow exchange contract implemented in 42 lines of JavaScript code.

*Bio:*
Mark S. Miller is the main designer of the E and Caja object-capability programming languages, inventor of Miller Columns, a pioneer of agoric (market-based secure distributed) computing, an architect of the Xanadu hypertext publishing system, and a representative to the EcmaScript committee.

View original page

14 May 16:15Rendezvous: A search engine for binary code / Wei Ming Khoo, Computer Laboratory, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

The problem of matching between binaries is important for software copyright enforcement as well as for identifying disclosed vulnerabilities in software. We present a search engine prototype called Rendezvous which enables indexing and searching for code in binary form. Rendezvous identifies binary code using a statistical model comprising instruction mnemonics, control-flow sub-graphs and data constants, which are simple to extract from a disassembly yet normalise well across different compilers and optimisations. Experiments show that Rendezvous achieves F2 measures of 86.7% and 83.0% on the GNU C library compiled with different compiler optimisations and the GNU coreutils suite compiled with gcc and clang respectively. These two code bases together comprise more than one million lines of code. Rendezvous will bring significant changes to the way patch management and copyright enforcement are currently performed.
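As a rough illustration of one of the feature classes the abstract mentions, the hypothetical sketch below extracts n-grams of instruction mnemonics from a disassembly listing. The real system additionally uses control-flow sub-graphs and data constants, and a proper statistical model over all three; this only shows the simplest ingredient.

```python
# Hypothetical sketch of one Rendezvous feature class: n-grams of
# instruction mnemonics taken from a disassembly listing. The listing
# below is illustrative, not taken from the paper.

from collections import Counter

def mnemonic_ngrams(disassembly, n=3):
    """Map a list of disassembled instructions to mnemonic n-gram counts."""
    mnemonics = [line.split()[0] for line in disassembly if line.strip()]
    grams = zip(*(mnemonics[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams)

listing = [
    "push ebp",
    "mov  ebp, esp",
    "sub  esp, 0x10",
    "mov  eax, [ebp+8]",
    "leave",
    "ret",
]

features = mnemonic_ngrams(listing, n=3)
print(features["push mov sub"])  # 1
```

Counting only mnemonics (and not operands) is one simple way such features can tolerate differences in register allocation between compilers, which is part of the normalisation the abstract alludes to.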

This is a practice talk for MSR'13.

View original page

08 May 16:15Pins, Tacks, and Slinks: Proposals for patching PKI on the web / Joseph Bonneau, Google

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
The Certificate Authority (CA) system, added as an afterthought in the mid-1990s during the initial development of SSL, has become a critical component for security on the web. Its faults have become painfully clear over the past two years, with at least four known CA compromises which have enabled eavesdropping of real users' web traffic, with grave consequences. This talk will survey the growing menagerie of proposals for patching the CA system to mitigate such failures, including HPKP, Certificate Transparency, DANE, TACK, Perspectives, and s-links. It will lay out the challenges inherent in any attempt to efficiently and securely distribute security policy on a global scale and compare several potential combinations of protocols which could be paths forward.
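To make one of these proposals concrete: public-key pinning (HPKP, later specified in RFC 7469) has the client remember a base64-encoded SHA-256 digest of the site's SubjectPublicKeyInfo and reject connections whose key does not match. A minimal sketch, assuming the SPKI bytes have already been extracted from the certificate (real clients must parse them out of the X.509 structure):

```python
# Minimal sketch of the HPKP-style pin check. The "SPKI" here is a plain
# byte string standing in for the DER-encoded SubjectPublicKeyInfo that
# a real client would extract from the server's certificate.

import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Base64-encoded SHA-256 digest of the SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pin_matches(spki_der: bytes, pinned: set) -> bool:
    """Accept the connection only if the presented key matches a pin."""
    return spki_pin(spki_der) in pinned

key = b"example-subject-public-key-info"
pins = {spki_pin(key)}  # learned on first visit or shipped preloaded

print(pin_matches(key, pins))              # True
print(pin_matches(b"attacker-key", pins))  # False
```

Pinning the key rather than the whole certificate lets a site rotate certificates (and even CAs) without breaking clients, which is one reason HPKP pins the SPKI digest; the hard problems the talk surveys are how pins are distributed, updated, and recovered from when a key is lost.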

*Bio:* Joseph Bonneau is an engineer at Google New York. He completed his PhD in 2012 at the Security Group in Cambridge under Ross Anderson on human authentication.

View original page

05 February 16:15A novel, efficient, scalable and easy-to-use cryptographic key management solution for wireless sensor networks / Dr Michael Healy, University of Limerick

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:* Due to the sensitive nature of the data gathered by many wireless sensor networks (WSNs) it is becoming critical that this data be protected. However, due to the constrained nature of resources on sensor nodes, this is a difficult task. In particular, the use of asymmetric cryptographic operations, i.e. public key ciphers, often places an unjustifiable burden on a sensor node’s resources. As a result symmetric key ciphers are primarily used in WSNs. This introduces the difficult task of deploying and managing the required symmetric keys, which can be a major challenge even for moderately sized networks. All currently available WSN-specific solutions to this problem either have a very simple key utilisation strategy for the network, resulting in a low level of security overall, or else only provide limited connectivity. Additionally the majority of these solutions are overly complex, both conceptually and in terms of implementation, and so they are not used. This work identifies ten requirements for a WSN key management solution and then presents the design, implementation and evaluation of a solution, called µKM, which meets each of these requirements and overcomes the problems of the existing schemes. This is achieved by relaxing the memory constraint in order to provide a large pool of keys to each node, a valid concession on newer-generation sensor nodes. The evaluation of µKM shows that it is as efficient, if not more so, than the existing solutions in terms of energy consumption, network latency, and, to a lesser extent, program memory and RAM requirements. It also comes out well ahead of the alternatives in link key establishment overheads due to the fact that it requires no prior and/or additional communication in order to set up individual link keys between any two nodes.

*Bio:* Dr. Michael Healy is the lead embedded systems software developer for Shimmer Research, a supplier of wireless sensor network technology primarily focused on health and fitness applications. He received a BEng degree in Computer Engineering from the University of Limerick in 2005 and was granted a PhD from the same institution in 2012 for work on securing wireless sensor networks. Prior to Shimmer Research, Michael worked as an R&D applications engineer in Intel's digital health group.

View original page

22 January 16:15CERB Banking: How to secure online banking and keep the users happy? / Pawel Jakub Dawidek

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Abstract:*
CERB Banking is an authentication system used to secure authentication to online banking sites as well as to sign transactions. The main authentication method is a mobile application, which generates one-time passwords and confirmation codes to sign transactions.

To our knowledge, CERB Banking, deployed in 2008 for Eurobank in Poland, was the first such solution in the world: a mobile application able to protect users against man-in-the-browser attacks by presenting transaction details and signing transactions.

The talk will provide an in-depth analysis of the security of the system and the mobile application, including details not disclosed anywhere else.

*Bio:*
Pawel Jakub Dawidek is co-owner of the WHEEL Systems company and the main architect of the CERB authentication system. Pawel is also a long-time FreeBSD committer, working mostly on security- and storage-related aspects of the system.

View original page

15 January 16:15Protecting your website from hackers / Ben Mathews, Facebook

Lecture Theatre 2, Computer Laboratory, William Gates Building

I will give a modified version of the talk we give our new engineers on how not to write security holes. This may be a little bit closer to Zend's talk. I will talk more openly about some of our solutions to a variety of web security issues where an outside hacker is typically trying to get control of your website. Among other things, I will cover:

a. XSS: XHP; alternatives to innerHTML in JavaScript; automatic detection of XSS holes.
b. SQL injection: our abstracted graph data store (which avoids the need for SQL); printf()-style SQL functions.
c. URL injection: our URI class for building URLs.
d. Shell injection: our printf()-style functions for running shell commands.
e. CSRF: generating CSRF tokens and checking them automatically on all POST requests; the importance of a good crypto library.
f. Brute-force attacks: also the importance of a good crypto library.
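One common construction for the CSRF tokens in item (e) is an HMAC over the user's session identifier under a server-side secret; a token submitted with a POST request is then recomputed and compared. This is a generic sketch of that pattern, not Facebook's (undisclosed) implementation.

```python
# Generic HMAC-based CSRF token scheme (illustrative, not Facebook's
# actual design). The token is bound to the session, so a token stolen
# from one user is useless against another.

import hashlib
import hmac

SECRET = b"server-side-secret-key"  # hypothetical; load from config in practice

def csrf_token(session_id: str) -> str:
    """Derive the CSRF token for a session."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def csrf_valid(session_id: str, token: str) -> bool:
    """Check a submitted token; compare_digest avoids timing side channels."""
    return hmac.compare_digest(csrf_token(session_id), token)

tok = csrf_token("session-1234")
print(csrf_valid("session-1234", tok))  # True
print(csrf_valid("session-5678", tok))  # False
```

The "good crypto library" point in (e) and (f) shows up here twice: a real HMAC implementation, and a constant-time comparison rather than ordinary string equality.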

2012

View original page

04 December 16:15Protecting websites from social engineering attacks against your users / Christopher Palow, Facebook

Lecture Theatre 2, Computer Laboratory, William Gates Building

I will talk about what phishing, fake accounts, self XSS, malware toolbars, .exe malware, and shared-secret stealing are, give some examples, and describe only a limited number of Facebook's countermeasures against such attacks. These are the types of attacks where the hacker doesn't gain control of your website, but only control of a user's account. Unfortunately, Facebook has to keep some of its protections secret, as they'd lose effectiveness if they were known. I will talk about the threats in detail; the treatment of solutions will be lighter.

View original page

27 November 16:15Who's next? Identifying risk factors for subjects of targeted attacks / Martin Lee (Symantec)

Lecture Theatre 2, Computer Laboratory, William Gates Building

Malware-containing emails can be sent to anyone. Single malware variants can be sent to tens of thousands of recipients without distinction. However, a small proportion of email malware is sent in low copy number to a small set of recipients that have apparently been specifically selected by the attacker. These targeted attacks are challenging to detect and if successful, may be particularly damaging for the recipient. The vast majority of Internet users will never be sent a targeted attack. The few users to which such attacks are sent, presumably possess features that have brought them to the attention of attackers, and have caused them to be selected for attack. Applying epidemiological techniques to calculate the odds ratio for features of malware recipients, both targeted and non-targeted, allows the identification of putative factors that are associated with targeted attack recipients. In this paper we show that it is possible to identify putative risk factors that are associated with individuals subjected to targeted attacks, by considering the threat akin to a public health issue. These risk factors may be used to identify those at risk of being subject to future targeted attacks, so that these individuals can take additional steps to secure their systems and data.
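The epidemiological measure applied in the abstract, the odds ratio, compares how common a feature is among targeted recipients versus non-targeted ones. A minimal sketch over a 2x2 contingency table, with illustrative (made-up) counts:

```python
# Odds ratio for a recipient feature (e.g. a particular job role) from a
# 2x2 table: targeted/non-targeted vs. feature present/absent. The
# counts below are invented for illustration only.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """OR = (a/b) / (c/d) for the table [[a, b], [c, d]]."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Suppose 30 of 100 targeted recipients have the feature,
# versus 10 of 100 non-targeted recipients.
or_value = odds_ratio(30, 70, 10, 90)
print(round(or_value, 2))  # 3.86
```

An odds ratio well above 1 marks the feature as associated with being targeted, which is exactly the kind of putative risk factor the paper proposes surfacing so that at-risk individuals can harden their systems.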

View original page

20 November 16:15An Updated Threat Model for Security Ceremonies / Jean Martina (Federal University of Santa Catarina / Brazil)

Lecture Theatre 2, Computer Laboratory, William Gates Building

Since Needham and Schroeder introduced the idea of an active attacker, a great deal of research has been done in the protocol design and analysis area in order to verify protocols' claims against this type of attacker. Nowadays, the Dolev-Yao threat model is the most widely accepted attacker model in the analysis of security protocols. Consequently, there are several security protocols considered secure against an attacker under Dolev-Yao's assumptions. With the introduction of the concept of ceremonies, which extends protocol design and analysis to include human peers, we can potentially find and solve security flaws that were previously not detectable. In this presentation, we discuss that, even though Dolev-Yao's threat model can represent the most powerful attacker possible in a ceremony, the attacker in this model is not realistic in certain scenarios, especially those related to the human peers. We propose a dynamic threat model that can be adjusted according to each ceremony, and consequently adapt the model and the ceremony analysis to realistic scenarios without degrading security, while improving usability.

View original page

14 November 14:15Structural executable comparison, malware classification, and collaborative binary analysis - the formerly-zynamics tools at Google / Thomas Dullien, Google

Lecture Theatre 1, Computer Laboratory

Recent years have seen an explosion in the industry adoption of reverse engineering for security purposes. Between the late 90's and today, a niche endeavor turned into industry practice - both for the analysis of malicious software and for the security review of closed-source software components. In 2011, Google acquired zynamics GmbH, a small company focused on developing software for (security-minded) reverse engineers. This talk will give an overview of the different areas in which zynamics worked prior to joining Google, and some of the directions in which we're moving now.

On the technical level, the talk will give an overview of our structural / graph-centric algorithms for executable comparison, how we used these algorithms for malware classification and byte-signature generation, and of our reverse-engineering IDE, which permits fully collaborative disassembly analysis for teams of reverse engineers.

23 October 16:15Exploring compartmentalisation hypotheses with SOAAP / Khilan Gudka (University of Cambridge)

Lecture Theatre 2, Computer Laboratory, William Gates Building

Application compartmentalisation decomposes software into sandboxed components in order to mitigate security vulnerabilities, and has proven effective in limiting the impact of compromise. However, experience has shown that adapting existing C-language software is difficult, often leading to problems with correctness, performance, complexity, and most critically, security. Security-Oriented Analysis of Application Programs (SOAAP) is an in-progress research project into new semi-automated techniques to support compartmentalisation. SOAAP employs a variety of static and dynamic approaches, driven by source code annotations termed compartmentalisation hypotheses, to help programmers evaluate strategies for compartmentalising existing software.

18 October 16:00From geek-dream to mass-market: Will privacy-preserving technologies ever be adopted? / Hamed Haddadi (Queen Mary, University of London)

FW26, Computer Laboratory, William Gates Building

We have been working on privacy preserving profiling, advertising, data mining, and user monitoring systems for a decade now, but we are yet to see a real world deployment. In this talk I will discuss some of the players in this ecosystem, their strengths and strategies, and the shortcomings of computer science solutions in this space. The talk is based on a number of recent papers and studies.

Bio: http://www.eecs.qmul.ac.uk/%7Ehamed/bio.txt

16 October 16:15Analysis of FileVault 2: Apple's full disk encryption scheme / Omar Choudary (University of Cambridge)

Lecture Theatre 2, Computer Laboratory, William Gates Building

With the launch of Mac OS X 10.7 (Lion), Apple introduced a volume encryption mechanism known as FileVault 2. Apple only disclosed marketing aspects of the closed-source software, e.g. its use of AES-XTS tweakable encryption, but a publicly available security evaluation and detailed description were unavailable until recently.

We have performed an extensive analysis of FileVault 2 and we have been able to find all the algorithms and parameters needed to successfully read an encrypted volume. This allows us to perform forensic investigations on encrypted volumes using our own tools.

In this presentation I will present the architecture of FileVault 2, giving details of the key derivation, encryption process and metadata structures needed to perform the volume decryption. I will also comment on the security of the system and the analysis we have performed.

Besides the analysis of the system, we have also built a library that can mount a volume encrypted with FileVault 2. As a contribution to the research and forensic communities we have made this library open source.
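One of the ingredients such an analysis has to recover, deriving a key from the user's password, can be sketched in a few lines. This is a minimal illustration using PBKDF2-SHA256; the salt, iteration count and key length below are placeholders, not Apple's actual parameters:

```python
import hashlib

# Hedged sketch of a password-to-key derivation step (PBKDF2).
# All parameters here are illustrative, not FileVault 2's real values.
def derive_volume_key(password: str, salt: bytes, iterations: int = 41000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=16)

key = derive_volume_key("hunter2", b"\x00" * 16)
print(len(key))  # a 16-byte intermediate key
```

The derivation is deterministic, so the same password and salt always reproduce the same key, which is what makes offline forensic decryption possible once the metadata structures are understood.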

The paper is available at http://eprint.iacr.org/2012/374

12 October 10:00Dynamically Enforcing Knowledge-based Security Policies / Michael Hicks, University of Maryland

Small lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

Knowledge-based security policies are those which specify a threshold on an adversary's knowledge about secret data. The data owner initially estimates what an adversary might know about his secret, and with each interaction, defined in terms of a query made by the adversary over his secret data, he updates his estimate. If a query response could lead the adversary's knowledge to exceed a given threshold, the query is denied.
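The threshold idea can be illustrated with a toy sketch (an explicit enumerated belief with Bayesian conditioning; the paper's actual implementation uses probabilistic polyhedra, not this brute-force form). Note that the policy checks every possible answer before responding, so a denial itself leaks nothing:

```python
from fractions import Fraction

def revise(belief, query, answer):
    """Condition the belief on query(secret) == answer (Bayes' rule)."""
    kept = {s: p for s, p in belief.items() if query(s) == answer}
    total = sum(kept.values())
    return {s: p / total for s, p in kept.items()}

def answer_query(belief, query, secret, threshold):
    # If any possible response would let the adversary place more than
    # `threshold` probability on some secret value, deny the query.
    for ans in {query(s) for s in belief}:
        revised = revise(belief, query, ans)
        if max(revised.values()) > threshold:
            return None  # query denied
    return query(secret)

# Secret is a birthday day-of-month, uniform over 1..31.
belief = {d: Fraction(1, 31) for d in range(1, 32)}
print(answer_query(belief, lambda d: d > 15, secret=7, threshold=Fraction(1, 10)))  # → False
print(answer_query(belief, lambda d: d == 7, secret=7, threshold=Fraction(1, 10)))  # → None
```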

In this talk I will discuss how we implement query analysis and belief tracking via abstract interpretation using a novel probabilistic polyhedral domain, whose design permits trading off precision with performance while ensuring estimates of a querier's knowledge are sound. I will present examples of our technique that might apply to personal data. I will also show how our technique can be generalized to reason about knowledge increase in secure multiparty computation (SMC), which is a protocol that allows a set of mutually distrusting parties to compute a function f of their private inputs while revealing nothing about their inputs beyond what is implied by the result. Our technique permits reasoning about what can be inferred by each participant from the result. Finally, I will sketch how we are working to apply our technique to securing sensor data streams.

This is joint work with Piotr Mardziel (Maryland), Jonathan Katz (Maryland), Stephen Magill (formerly at Maryland), and Mudhakar Srivatsa (IBM). For more details see our papers at CSF'11 and PLAS'12:

http://www.cs.umd.edu/~mwh/papers/mardziel11belief.html
http://www.cs.umd.edu/~mwh/papers/mardziel12smc.html

09 October 16:15Aurasium: Practical Policy Enforcement for Android Applications / Rubin Xu (University of Cambridge)

Lecture Theatre 2, Computer Laboratory, William Gates Building

With the increasing popularity and growing market share of Google's mobile platform Android, it has become the top target of the latest mobile malware. Previous work on Android security and privacy control has produced solutions that require modification to the operating system itself. This requires the user to root his phone and install custom firmware, due to software, hardware, and policy choices made by Google, the phone manufacturers, and cellular providers. There is no guarantee that these solutions will ever make their way to consumers unless Google implements them in the main Android OS source code repository.

We developed a novel approach named Aurasium that bypasses the need to change the firmware. We automatically rewrite arbitrary apps by attaching interposition code to closely watch the application's behaviour for security and privacy violations, such as attempts to retrieve a user's sensitive information, send SMS covertly to premium numbers, or access malicious IP addresses. Aurasium can also detect and prevent cases of privilege escalation attacks. Experiments show that we can apply Aurasium to a large corpus of benign and malicious applications with over 99% success rate.

04 October 17:00Technology Risk Management - Achilles Heels, Myths and Mirages / Tony Chew (Monetary Authority of Singapore)

Lecture Theatre 2, Computer Laboratory, William Gates Building

The technology landscape is constantly shifting, evolving and advancing. Major technological disruptions through innovations and new discoveries come around every decade or so. This decade is no different. The technological trends currently emerging relate to mobile banking and payments, cloud computing, big data analytics, core banking system upgrades, migration to chip cards, dynamic authentication and IT security. Trust and confidence, safety and soundness should be the cornerstone of these developments and of the initiatives taken in the financial industry.

The proliferation of technology in banking is pervasive and far-reaching. Technology and customer demand are driving a huge transformation as to how banking is done. Bank senior management will have to navigate the murky waters of over-hyped and under-delivered performance of some new technologies. IT governance and technology risk management play a very important role here. Regulators will need to see that due diligence practices and safety and soundness requirements are not impeded, impaired or undermined when new technologies are deployed.

Speaker's Bio

Tony joined the Monetary Authority of Singapore in 1999 to head up the Technology Risk Supervision Division. His responsibilities included the development of strategies, programmes, standards and guidelines for the purpose of regulating and supervising financial institutions in respect of technology risk management requirements and information security processes. Tony has held the appointment of Director (Specialist Advisor) for information technology security and risk management since 1 May 2011. He has been actively engaged in conducting seminars and workshops on banking systems security and technology risk management in America, Asia, Australia, China and Europe.

01 October 15:00Explorations of Science in Cyber Security / Greg Shannon (CERT and Carnegie Mellon University)

Lecture Theatre 2, Computer Laboratory, William Gates Building

A scientific perspective on cyber security (a “science of cyber security”) is growing as a sound and respected area of research. In this talk we discuss how an empirical perspective enhances our understanding of how to create efficiently secure cyber infrastructure. In particular we discuss four questions that reflect “delusions” that we at the CERT Program see as endemic in the practice of cyber security.

1. If code correctness is improving, why do exploits continue to rely on known, avoidable programming mistakes?
2. If policies are effective, why do unimplemented or ineffective policies continue to be an enabling element of major incidents?
3. If monitoring provides useful situational awareness, why do so many significant intrusions remain undetected for weeks, months, or even years?
4. If proficient response capabilities exist, why are even sophisticated victims challenged to quickly and effectively investigate, mitigate and recover?

We discuss our recent work in synthetic data generation and other work at CERT that strives to take sound scientific approaches to understanding and solving the challenges of creating and operating efficiently secure cyber infrastructure.

Some of the publicly available cyber security information and tools from the CERT Program include:

Secure Coding, http://www.cert.org/secure-coding

Resiliency, http://www.cert.org/resilience

Cyber Training, http://www.cert.org/work/training.html

Insider Threats, http://www.cert.org/insider_threat

Forensics, http://www.cert.org/forensics

Network Monitoring, http://tools.netsa.cert.org

Fuzz Testing, http://www.cert.org/download/bff

Additional information is available at www.cert.org and in the 2010 CERT Research Report, www.cert.org/research/2010research-report.pdf.

27 September 16:15Protecting Distributed Applications Through Software Diversity and Renewability / Christian Collberg, University of Arizona

Lecture Theatre 2, Computer Laboratory, William Gates Building

Remote Man-at-the-end (R-MATE) attacks occur in distributed applications where an adversary has physical access to an untrusted client device and can obtain an advantage from inspecting, reverse engineering, or tampering with the hardware itself or the software it contains.

In this talk we give an overview of R-MATE scenarios and present a system for protecting against attacks on untrusted clients. In our system, the trusted server overwhelms the client's analytical abilities by continuously and automatically generating and pushing diverse variants of the client code to it. The diversity subsystem employs a set of primitive code transformations that provide temporal, spatial, and semantic diversity in order to generate an ever-changing attack target for the adversary, making tampering difficult without it being detected by the server.

Speaker's Bio

Christian Collberg received a BSc in Computer Science and Numerical Analysis and a Ph.D. in Computer Science from Lund University, Sweden. He is currently an Associate Professor in the Department of Computer Science at the University of Arizona and has also worked at the University of Auckland, New Zealand, and holds a position at the Chinese Academy of Sciences in Beijing, China.

Prof. Collberg is the author of the first comprehensive textbook on software protection, "Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection," published in Addison-Wesley's computer security series.

Prof. Collberg is a leading researcher in the intellectual property protection of software, and also maintains an interest in compiler and programming language research. In his spare time he writes songs, sings, and plays guitar for The Undecidables and hopes one day to finish up his Great Swedish Novel.

31 July 16:15Bromium: Task isolation through hardware-assisted virtualization / Ian Pratt, Bromium

Lecture Theatre 2, Computer Laboratory, William Gates Building

Software running on modern client systems has become too large and complex to secure via conventional means, making it an easy target for malware. This talk discusses how hardware-assisted virtualization can be used to retrofit robust isolation and protection to client systems, resulting in a much more defensible platform with much greater resistance to malware and user error, while operating transparently to the end user.

The talk will examine the architectural progression which led from the development of XenClient XT (an MILS system designed for the US intelligence and defence communities) to the Bromium platform, which draws on much of the same technology but is designed for a far more mainstream use case.


About the speaker:

Ian Pratt leads the product team at Bromium, a startup focussed on making computer systems more trustworthy. He was formerly a member of faculty at the University of Cambridge Computer Laboratory, where he led the systems research group before leaving to found XenSource, which was acquired by Citrix in 2007. He co-founded Bromium early last year, which now employs over 40 researchers and developers across its offices in Cambridge UK and Cupertino, California.

10 July 16:15Security Analysis of Industrial Control Systems / Arthur Gervais (Aalto University)

Lecture Theatre 2, Computer Laboratory, William Gates Building

Industrial Control Systems (ICS), often referred to as SCADA (Supervisory Control And Data Acquisition) systems, have gained increasing attention from IT security researchers. This talk introduces the terminology and background of ICS and explains why they are difficult to secure. Moreover, the talk will present security analysis guidelines for ICS devices. These guidelines can be applied to many ICS devices and are mostly vendor-independent. Furthermore, a Modbus/TCP interactive packet manipulation program, based on Scapy, was developed for assessing critical infrastructures and ICS devices.

In the second half of the talk, I will describe a security analysis performed on a real device: an ICS demo case containing current products in use in ICS. Besides known security issues, the analysis shows how the data visualised by the Human Machine Interface (HMI) can be altered and modified without limit. Secondly, physical values read by sensors, such as temperatures, can be altered within the Programmable Logic Controller (PLC). Thirdly, input validation flaws also represent critical security issues in the ICS world. Lastly, existing security solutions for securing current ICS are briefly presented.
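To make concrete why such protocols are easy to tamper with, here is a stdlib-only sketch (not the speaker's Scapy-based tool) that hand-builds a Modbus/TCP "read holding registers" request. Note the complete absence of any authentication field in the frame:

```python
import struct

def modbus_read_holding(transaction_id: int, unit: int,
                        start: int, count: int) -> bytes:
    """Build a Modbus/TCP request for function 3 (read holding registers)."""
    pdu = struct.pack(">BHH", 0x03, start, count)   # function code + address + count
    mbap = struct.pack(">HHHB", transaction_id, 0,  # transaction id, protocol id (0)
                       len(pdu) + 1, unit)          # remaining length, unit id
    return mbap + pdu

frame = modbus_read_holding(1, unit=1, start=0, count=10)
print(frame.hex())  # → 00010000000601030000000a
```

Anyone with network access to the PLC can craft such a frame, which is exactly the class of issue the talk's analysis guidelines probe for.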

27 June 11:00Code-Level Formal Verification for Large Real Systems / Mark Staples (NICTA)

SS03, Computer Lab, William Gates Building

NICTA has completed the machine-checked, code-level formal verification of the full functional correctness of the seL4 operating system microkernel. This outcome confirms that it is feasible to perform this kind of detailed formal verification in real software engineering projects. However, although seL4 is complex, it is not a very large system (8,700 lines of C code).

Our next broad challenge is to make it feasible to complete the code-level formal verification of key security and safety properties of very large highly-critical software-intensive systems. We expect that seL4 will provide a foundation for this. In this talk I will give an overview of three areas of recent ongoing research that I am involved with that help to address this broad challenge.

The first area is on better understanding of the software process and management for large-scale formal methods projects. The second area is on approaches to define and analyse software architectures for large trustworthy systems built using trusted and untrusted components. The final area is more methodological and philosophical: how should we establish the empirical validity of the formal models used in formal verification?

Bio: Mark Staples is a Principal Researcher in the Software Systems Research Group at NICTA, and a Conjoint Senior Lecturer at the University of New South Wales. He is conducting research at the borders between software engineering, formal methods, and systems.

Earlier at NICTA he was a member of, then led, NICTA's empirical software engineering group. He was the founding leader of the Fraunhofer Project Centre in Transport and Logistics at NICTA, a strategic collaboration between NICTA and Fraunhofer IESE. In conjunction with Fraunhofer IESE and SAP Research, he led the creation of the Future Logistics Living Lab facility and industry network.

Prior to joining NICTA, he worked in the software industry for several years, first on a safety-critical SCADA system, and then on a business-critical web payments infrastructure product. He completed undergraduate degrees in computer science and cognitive science at the University of Queensland, and a PhD on theorem proving and formal methods at the University of Cambridge.

22 June 11:00Towards Trustworthy Embedded Systems / Gernot Heiser (University of New South Wales/NICTA)

FW26, Computer Laboratory, William Gates Building

Embedded systems are increasingly used in circumstances where people's lives or valuable assets are at stake, hence they should be trustworthy - safe, secure, reliable. True trustworthiness can only be achieved through mathematical proof of the relevant properties. Yet, real-world software systems are far too complex to make their formal verification tractable in the foreseeable future. The Trustworthy Systems project at NICTA has formally proved the functional correctness as well as other security-relevant properties of the seL4 microkernel. This talk will provide an overview of the principles underlying seL4, and the approach taken in its design, implementation and formal verification. It will also discuss on-going activities and our strategy for achieving the ultimate goal of system-wide security guarantees.

07 June 16:00Lock Inference in the Presence of Large Libraries / Khilan Gudka (University of Cambridge)

FW26, Computer Laboratory, William Gates Building

Atomic sections can be implemented using lock inference. For lock inference to be practically useful, it is crucial that large libraries be analysed. However, libraries are challenging for static analysis, due to their cyclomatic complexity.

Existing approaches either ignore libraries, require library implementers to annotate which locks to take, or only consider accesses performed up to one level deep in library call chains. Thus, some library accesses may go unprotected, leading to the atomicity violations that atomic sections are supposed to eliminate.

We present a lock inference approach for Java that analyses library methods in full. We achieve this by (i) formulating lock inference as an Interprocedural Distributive Environment dataflow problem, (ii) using a graph representation for summary information, and (iii) applying a number of optimisations to our implementation to reduce space-time requirements and the number of locks inferred. We demonstrate the scalability of our approach by analysing the entire GNU Classpath library, comprising 122 KLOC.

31 May 16:00Attacks and defenses in decentralised botnets / Shishir Nagaraja (University of Birmingham)

SS03, Computer Lab, William Gates Building

As corporations, agencies, and individuals continue to invest in national infrastructure, trusting it to withstand cyber-attacks, it is important to ensure that this trust is warranted. In this talk, I will present ISP-level countermeasures that localise bots based on the unique communication patterns arising from the overlay topologies used for command and control. I will also present schemes that allow ISPs to cooperatively detect botnet attacks and other network anomalies without leaking private traffic information. Experimental results on synthetic topologies embedded within Internet traffic traces from an ISP's backbone network indicate that our techniques (i) can localise the majority of bots with a low false-positive rate, (ii) are resilient to the partial visibility arising from partial deployment of monitoring systems, and to measurement inaccuracies arising from partial visibility and the dynamics of background traffic, and (iii) are scalable enough to show good promise as a key element of a wider network anomaly detection framework.

Motivation:
http://www.guardian.co.uk/technology/blog/2009/mar/29/dalai-lama-china-malware
The snooping dragon: Social malware surveillance of the Tibetan movement http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-746.pdf

Technical paper:
http://www.usenix.org/event/sec10/tech/full_papers/Nagaraja.pdf

Bio: Shishir Nagaraja is a researcher in network security and privacy. He holds the position of a Lecturer at the University of Birmingham, as well as concurrent appointments as Adjunct Professor at the University of Illinois at Urbana-Champaign, USA and Assistant Professor at IIITD, India. He holds a PhD in Computer Security from the University of Cambridge. He has worked in the software industry for several years as a Software Engineer at Novell Bangalore. He holds several patents in the area of trust and security.

29 May 16:15Some recent results on TLS and DTLS / Kenny Paterson (Royal Holloway)

Lecture Theatre 2, Computer Laboratory, William Gates Building

TLS is the de facto protocol of choice for securing Internet communications, while DTLS is an increasingly important variant of TLS that was designed for use in lightweight applications. In this talk, I will provide an overview of some recent results - both positive and negative - about the security of the TLS and DTLS protocols.

17 May 16:00Facebook and Privacy: The Balancing Act of Personality, Gender, and Relationship Currency / Daniele Quercia (University of Cambridge)

FW26, Computer Laboratory, William Gates Building

Social media profiles are telling examples of the everyday need for disclosure and concealment. The balance between concealment and disclosure varies across individuals, and personality traits might partly explain this variability. Experimental findings on the relationship between information disclosure and personality have so far been inconsistent. We thus study this relationship anew with 1,313 Facebook users in the United States using two personality tests: the big five personality test and the self-monitoring test. We model the process of information disclosure in a principled way using Item Response Theory and correlate the resulting user disclosure scores with personality traits. We find a correlation with the trait of Openness and observe gender effects, in that men and women share an equal amount of private information, but men tend to make it more publicly available, well beyond their social circles. Interestingly, geographic (e.g., residence, hometown) and work-related information is used as relationship currency, in that it is selectively shared with social contacts and is rarely shared with the Facebook community at large.

link: http://www.cl.cam.ac.uk/~dq209/publications/quercia12privacy.pdf

09 May 14:15CANCELLED: Structural executable comparison, malware classification, and collaborative binary analysis - the formerly-zynamics tools at Google / Thomas Dullien, Google

Lecture Theatre 1, Computer Laboratory

Recent years have seen an explosion in the industry adoption of reverse engineering for security purposes. Between the late 90s and today, a niche endeavor turned into industry practice, both for the analysis of malicious software and for the security review of closed-source software components. In 2011, Google acquired zynamics GmbH, a small company focused on developing software for (security-minded) reverse engineers. This talk will give an overview of the different areas in which zynamics worked prior to joining Google, and some of the directions in which we're moving now.

On the technical level, the talk will give an overview of our structural / graph-centric algorithms for executable comparison, how we used these algorithms for malware classification and byte-signature generation, and of our reverse-engineering IDE, which permits fully collaborative disassembly analysis for teams of reverse engineers.

08 May 16:15Building Bankomat: Cash dispensers and the development of on-line, real-time networks in Britain and Sweden, c.1965-1985 / Bernardo Batiz-Lazo (Bangor University)

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk explores the technological choices made at the dawn of the massification of retail finance, and specifically how the idea that computers could enable a cash-free society appeared concurrently with cash dispenser technology. To describe and analyse the development of electronic banking and its entanglement with wider historical processes, we document how the deployment of cash dispenser networks, and later a fleet of automated teller machines (ATMs), interwove with the adoption of on-line real-time (OLRT) computing in Sweden and the UK. British savings banks started their computerisation rather ‘late’ and benefited from adopting ‘tried and tested’ technology, while Swedish savings banks spearheaded technological change in Europe. In documenting the sequence of events in the networking of Swedish and British banking, we depart from the predominant view that treats the development of OLRT as a single move. Instead, we propose that there are specific conditions inside banking organisations that require considering on-line (OL), or asynchronous, and on-line real-time (OLRT), or synchronous, communication as two distinct stages of development in the adoption of computer technology. As a result, we show how delivering a cashless society proved more difficult than anticipated.

01 May 16:15New evidence on consumers’ willingness to pay for privacy / Sören Preibusch (University of Cambridge)

Lecture Theatre 2, Computer Laboratory, William Gates Building

How do the differences in data collection and processing between competing online retailers influence consumers’ purchasing decisions? Are online shoppers willing to pay extra for better privacy and can companies monetise good privacy practices? I will report on evidence from the largest experiment to date into behavioural privacy economics.

23 April 14:00Rekeyable Ideal Cipher from a Few Random Oracles / Elena Andreeva, K.U. Leuven

Small lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

Reducing the security of a complex construction to that of a simpler primitive is one of the central methods of cryptography. Rather recently, in the domain of cryptographic hashing, constructions such as Merkle-Damgard and the sponge, based on a fixed-length random oracle (a compression function or permutation), have been proven indifferentiable from a finite-length random oracle. Moreover, Feistel based on a fixed-length random oracle has been shown indifferentiable from a wider random oracle. In this talk we address the fundamental question of constructing an ideal cipher (consisting of exponentially many random oracles) from a small number of fixed-length random oracles.

In this talk, we show that the multiple Even-Mansour construction with 4 rounds, randomly drawn fixed underlying permutations, and a bijective key schedule is indifferentiable from an ideal cipher. Our proof is accompanied by an efficient differentiability attack on multiple Even-Mansour with 3 rounds.

Practically speaking, we provide a construction of an ideal cipher, as a set of exponentially many permutations, from as few as 4 permutations. On the theoretical side, this result confirms the equivalence between the ideal cipher and random oracle models.
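The round structure being analysed can be shown with a toy sketch (an 8-bit block with a trivial identity key schedule, purely to illustrate the alternation of key XORs and fixed public permutations; the talk's results concern the idealised construction, not any concrete toy instance):

```python
import random

BLOCK = 256  # 8-bit toy block space
_rng = random.Random(0)
# Four fixed, public, randomly drawn permutations.
PERMS = [_rng.sample(range(BLOCK), BLOCK) for _ in range(4)]

def em_encrypt(key: int, x: int) -> int:
    """4-round iterated Even-Mansour: XOR the key, permute, repeat."""
    y = x ^ key
    for perm in PERMS:
        y = perm[y] ^ key
    return y

def em_decrypt(key: int, y: int) -> int:
    for perm in reversed(PERMS):
        y = perm.index(y ^ key)
    return y ^ key

# Each key selects one permutation of the block space.
assert all(em_decrypt(0x5A, em_encrypt(0x5A, x)) == x for x in range(BLOCK))
```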

18 April 10:30Confining the Ghost in the Machine: Using Types to Secure JavaScript Sandboxing / Shriram Krishnamurthi, Brown University

Large lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

The commercial Web depends on combining content, especially advertisements, from sites that do not trust one another. Because this content can contain malicious code, several corporations and researchers have designed JavaScript sandboxing techniques (e.g., ADsafe, Caja, and Facebook JavaScript). These sandboxes depend on static restrictions, transformations, and libraries that perform dynamic checks. How can we be sure that they work?

We tackle the problem of proving the security of these sandboxes. Our technique depends on creating specialized types to characterize the properties of the sandboxes, exploiting the structure of the checks contained in the libraries. The resulting checkers work on actual JavaScript code that is effectively unaltered; I will focus on our application to Yahoo!'s ADsafe. We establish soundness using our semantics for JavaScript, which has been tested for conformity against real implementations.

Joint work with Arjun Guha and Joe Politz.

17 April 16:00Efficient Cryptography for the Next Generation Secure Cloud / Alptekin Küpçü, Koç University

Primrose Room, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

Peer-to-peer (P2P) systems, and client-server type storage and computation outsourcing constitute some of the major applications that the next generation cloud schemes will address. Since these applications are just emerging, it is the perfect time to design them with security and privacy in mind. Furthermore, considering the high-churn characteristics of such systems, the cryptographic protocols employed must be efficient and scalable.

In this talk, I will focus on an efficient and scalable fair exchange protocol that can be used for exchanging files between participants of a P2P file sharing system. It has been shown that fair exchange cannot be done without a trusted third party (called the Arbiter). Yet, even with a trusted Arbiter, it is still non-trivial to come up with an efficient solution, especially one that can be used in a P2P file sharing system with a high volume of data exchanged. Our protocol is optimistic, removing the need for the Arbiter's involvement unless a dispute occurs. While the previous solutions employ costly cryptographic primitives for every file or block exchanged, our protocol employs them only once per peer, therefore achieving O(n) efficiency improvement when n blocks are exchanged between two peers.
In practice, this corresponds to a one to two orders of magnitude improvement in terms of both computation and communication (42 minutes vs. 40 seconds, 225 MB vs. 1.8 MB). Thus, for the first time, a provably secure (and privacy-respecting, when payments are made using e-cash) fair exchange protocol is being used in real bartering applications (e.g., BitTorrent) without sacrificing performance.
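A back-of-envelope cost model makes the O(n) claim concrete (the numbers below are hypothetical illustrations, not the paper's measurements):

```python
# Per-exchange cost when the expensive primitive (cost C) runs once per
# block, versus once per peer with only cheap per-block work (cost c).
def per_block_cost(n, C, c):
    return n * (C + c)

def per_peer_cost(n, C, c):
    return C + n * c

# Illustrative, hypothetical numbers: n = 1000 blocks, C = 2.5 s, c = 0.04 s
# per_block_cost(1000, 2.5, 0.04) ≈ 2540 s
# per_peer_cost(1000, 2.5, 0.04)  ≈ 42.5 s
```

As n grows, the ratio approaches (C + c)/c, which is where an order-of-magnitude gap like the one quoted above can come from.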

Finally, if time permits, I will briefly mention some of our other results on cloud security, including ways to securely outsource computation and storage to untrusted entities, official arbitration in the cloud, impossibility results on distributing the Arbiter, keeping user passwords safe, and the Brownie Cashlib cryptographic library, including the ZKPDL zero-knowledge proof description language, which we have developed. I will also be available to talk about these other projects after the presentation.

View original page

05 April 09:30The science of guessing / Joseph Bonneau (Cambridge University)

Small lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

Despite decades of efforts to improve authentication, the world still relies heavily on secrets chosen (and memorized) by humans: passwords, PINs, personal knowledge questions and the occasional graphical password scheme. While everybody knows these secrets are possible for attackers to guess, our understanding of just how difficult they are to guess remains vague. Are passwords or PINs harder, and by how much? How can we accurately compare the difficulty of guessing passwords chosen by older users with those chosen by younger users, or those chosen by English speakers with those chosen by Spanish speakers? This talk will address these questions, presenting the speaker's dissertation research and upcoming IEEE Security & Privacy Symposium publication. To do so, the talk will introduce the right statistical metrics for measuring guessing resistance, discuss how to collect large password datasets in a privacy-friendly and secure manner, and discuss some findings from analyzing 70 million passwords from Yahoo! users, perhaps the largest corpus ever studied.
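The kind of metric at issue can be illustrated with a small sketch (toy distribution and function names of our choosing, not the paper's definitions): min-entropy measures resistance to a single optimal guess, while a success-rate metric measures how often an attacker succeeds within a budget of guesses.

```python
import math

def min_entropy(probs):
    """Bits of resistance against a single optimal guess: -log2(max p)."""
    return -math.log2(max(probs))

def success_rate(probs, beta):
    """Probability an attacker succeeds within beta guesses,
    trying the most common secrets first."""
    return sum(sorted(probs, reverse=True)[:beta])

# Toy distribution: one very popular secret plus a uniform long tail.
probs = [0.04] + [0.96 / 9999] * 9999
# min_entropy(probs) ≈ 4.64 bits; success_rate(probs, 10) ≈ 0.041
```

Note how a single popular choice drags min-entropy far below the uniform ideal (log2(10000) ≈ 13.3 bits), which is why human-chosen distributions need metrics beyond a single entropy number.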

View original page

02 April 16:00Towards Statistical Queries over Distributed Private User Data / Paul Francis (MPI-SWS)

FW26, Computer Laboratory, William Gates Building

Today the method du jour for statistical analysis of user behavior is to gather lots of user data, anonymize it (more or less), and then analyze that data. The need for statistical analysis drives many companies to gather large amounts of user data, often without the users' awareness. My research group at MPI-SWS has been exploring approaches for doing statistical analysis without gathering user data. Rather, user data is kept on user devices, and queries are pushed to these devices. The resulting answers are anonymized and fuzzed such that 1) no single party can associate data with individual users, and 2) the aggregate answers are differentially private. In this talk, I will present a general approach that will appear at NSDI this year. I will outline the shortcomings of this approach, and follow with some enhancements that scale better in specific application domains, namely web analytics and behavioral advertising.
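As a sketch of the fuzzing step, here is the standard Laplace mechanism applied to a count query (illustrative only; the actual system also distributes the query to user devices and anonymizes the answers across parties, which is not shown):

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    sign = -1 if u < 0 else 1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon):
    """Answer a count query with epsilon-differential privacy.
    A count has sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

A single noisy answer hides any individual's presence, while repeated or aggregated queries still estimate the true count well; the smaller the epsilon, the stronger the privacy and the noisier the answer.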

Bio: Paul Francis is a tenured faculty at the Max Planck Institute for Software Systems in Germany. Paul has held research positions at Cornell University, ACIRI, NTT Software Labs, Bellcore, and MITRE, and was Chief Scientist at two Silicon Valley startups. Paul's research centers around routing and addressing problems in the Internet and P2P networks. Paul's innovations include NAT, shared-tree multicast, the first P2P multicast system, the first DHT (as part of landmark routing), and Virtual Aggregation. Recently Paul has become interested in designing advertising systems that protect user privacy while allowing for effective targeting.

View original page

28 March 14:00Non-Interactive Verifiable Computation / Bryan Parno, Microsoft Research

Small lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

The growth of "cloud computing" and the proliferation of mobile devices contribute to a desire to outsource computing from a client device to an online service. However, in these applications, how can the client verify that the result returned is correct, without redoing the computation herself? We formalize this setting by introducing the notion of verifiable computation, and we provide a protocol that achieves asymptotically optimal performance (amortized over multiple inputs). We then extend the definition of verifiable computation in two important directions: public delegation and public verifiability, which have important applications in many practical delegation scenarios. To achieve these new properties, we establish an important (and somewhat surprising) connection between verifiable computation and attribute-based encryption. Finally, we introduce a new characterization of NP that lends itself to very efficient cryptographic applications, including verifiable computation, succinct non-interactive arguments, and non-interactive zero knowledge proofs.

Bryan Parno is a researcher in the Security and Privacy Group within Microsoft Research, Redmond. His interests span a broad range of security topics, including network and system security, applied cryptography, usable security, and data privacy. Currently, he is investigating next-generation application models, privacy-preserving online services, and cryptographic techniques for securely outsourcing computation. He completed his PhD at Carnegie Mellon University, where he was advised by Adrian Perrig. His dissertation received the 2010 ACM Doctoral Dissertation Award, and he recently co-authored the book “Bootstrapping Trust in Modern Computers”.

View original page

20 March 16:15Trustworthy Medical Device Software / Kevin Fu (University of Massachusetts Amherst)

Lecture Theatre 2, Computer Laboratory, William Gates Building

The U.S. Institute of Medicine commissioned my 2011 report on the role of trustworthy software in the context of U.S. medical device regulation. This talk will provide a glimpse into the risks, benefits, and regulatory issues for innovation of trustworthy medical device software.

Today, it would be difficult to find medical device technology that does not critically depend on computer software. The technology enables patients to lead more normal and healthy lives. However, medical devices that rely on software (e.g., drug infusion pumps, linear accelerators) continue to injure or kill patients in preventable ways--despite the lessons learned from the tragic radiation incidents of the Therac-25 era. The lack of trustworthy medical device software leads to shortfalls in properties such as safety, effectiveness, dependability, reliability, usability, security, and privacy.

Come learn a bit about the science, technology, and policy that shapes medical device software.

Bio:

Kevin Fu is an Associate Professor of Computer Science and adjunct Associate Professor of Electrical & Computer Engineering at the University of Massachusetts Amherst. Prof. Fu makes embedded computer systems smarter: better security and safety, reduced energy consumption, faster performance. His most recent contributions on trustworthy medical devices and computational RFIDs appear in computer science and medical conferences and journals. The research is featured in critical articles by the NYT, WSJ, and NPR.

Prof. Fu served as a visiting scientist at the Food & Drug Administration, the Beth Israel Deaconess Medical Center of Harvard Medical School, and MIT CSAIL. He is a member of the NIST Information Security and Privacy Advisory Board. Prof. Fu received a Sloan Research Fellowship, NSF CAREER award, and best paper awards from various academic silos of computing. He was named MIT Technology Review TR35 Innovator of the Year. Prof. Fu received his Ph.D. in EECS from MIT when his research pertained to secure storage and web authentication. He also holds a certificate of achievement in artisanal bread making from the French Culinary Institute. He has a doppelganger who works on energy-aware embedded systems.

View original page

20 March 14:00Insecurity Engineering in Locks / Marc Weber Tobias

Lecture Theatre 2, Computer Laboratory, William Gates Building

Insecure designs in physical security locks, safes, and other products have consequences in terms of security, liability, and even loss of life. Marc Weber Tobias and his colleague, Tobias Bluzmanis, will discuss a number of cases involving design issues that allow locks and safes to be opened in seconds. In one instance, the insecurity of a gun safe led to the death of a three-year-old child in the United States. Marc will demonstrate different products that appear secure but in fact are not. A case example will also be presented involving a lock from Finland that is a perfect example of insecurity engineering. This patented and award-winning design appears quite secure, utilizing electronic credentials, and yet is seriously flawed.

Speaker's Bio:
Marc is a physical security expert in the United States who is an investigative attorney and leads a team of specialists who analyze locks and security hardware for many of the largest lock manufacturers in the world.

View original page

14 March 10:00Malleability in Modern Cryptography / Markulf Kohlweiss, MSRC

Large lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

In recent years, malleable cryptographic primitives have advanced from being seen as a weakness allowing for attacks, to being considered a potentially useful feature. Malleable primitives are cryptographic objects that allow for meaningful computations, as most notably in the example of fully homomorphic encryption. Malleability is, however, a notion that is difficult to capture both in the hand-written and the formal security analysis of protocols.

In my work, I look at malleability from both angles. On one hand, it is a source of worrying attacks that have, e.g., to be mitigated in a verified implementation of the transport layer security (TLS) standard used for securing the Internet. On the other hand, malleability is a feature that helps to build efficient protocols, such as delegatable anonymous credentials and fast and resource friendly proofs of computations for smart metering. We are building a zero-knowledge compiler for a high-level relational language (ZQL), that systematically optimizes and verifies the use of such cryptographic evidence.

We recently discovered that malleability is also applicable to verifiable shuffles, an important building block for universally verifiable, multi-authority election schemes. We construct a publicly verifiable shuffle that for the first time uses one compact proof to prove the correctness of an entire multi-step shuffle. In our work, we examine notions of malleability for non-interactive zero-knowledge (NIZK) proofs. We start by defining a malleable proof system, and then consider ways to meaningfully ‘control’ the malleability of the proof system. In our shuffle application controlled-malleable proofs allow each mixing authority to take as input a set of encrypted votes and a controlled-malleable NIZK proof that these are a shuffle of the original encrypted votes submitted by the voters; it then permutes and re-randomizes these votes and updates the proof by exploiting its controlled malleability.
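The mixing step being certified, re-randomize and then permute, can be sketched with toy ElGamal parameters (an insecure illustration group of our choosing; the controlled-malleable NIZK proof that makes the shuffle publicly verifiable is the contribution of the work and is omitted here):

```python
import random

# Toy ElGamal over the order-509 subgroup of Z_1019^* (illustrative,
# insecure sizes; real schemes use much larger groups).
p, q, g = 1019, 509, 4  # p = 2q + 1, g generates the order-q subgroup

def keygen():
    x = random.randrange(1, q)           # secret key
    return x, pow(g, x, p)               # (sk, pk)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def rerandomize(pk, ct):
    """Multiply in a fresh encryption of 1: same plaintext, new ciphertext."""
    a, b = ct
    s = random.randrange(1, q)
    return (a * pow(g, s, p)) % p, (b * pow(pk, s, p)) % p

def mix(pk, cts):
    """One mixing authority's core step: re-randomize, then permute."""
    out = [rerandomize(pk, ct) for ct in cts]
    random.shuffle(out)
    return out

def decrypt(x, ct):
    a, b = ct                            # a lies in the order-q subgroup,
    return (b * pow(a, q - x, p)) % p    # so a^(q-x) is the inverse of a^x
```

Decrypting the mixed list yields the same multiset of plaintexts as the input, but no output ciphertext can be linked to its origin; the controlled-malleable proof is what lets each authority update the running proof to certify exactly this transformation and nothing more.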

View original page

28 February 13:00A Proposed Framework for Analysing Security Ceremonies / Jean Martina (Federal University of Santa Catarina / Brazil)

Computer Laboratory, William Gates Building, Room SS03

The concept of ceremony as an extension to network and security protocols was introduced by Ellison. No methods or tools to check correctness or the properties of such ceremonies are currently available. The applications for security ceremonies are vast and fill gaps left by strong assumptions in security protocols, such as the provisioning of cryptographic keys or correct human interaction. Moreover, no tools are available to check how knowledge is distributed among human peers and in their interaction with other humans and computers in these scenarios. The key component in this paper is the formalisation of human knowledge distribution in security ceremonies. By properly enlisting human expectations and interactions in security protocols, we can minimise the ill-described assumptions we usually see failing. Taking such issues into account when designing or verifying protocols can help us to better understand where protocols are more prone to break due to human constraints.

View original page

24 February 14:00Don't kill my ads! Balancing Privacy in an Ad-Supported Mobile Application Market / Ilias Leontiadis (University of Cambridge)

FW26, Computer Laboratory, William Gates Building

Application markets have revolutionized the software download model of mobile phones: third-party application developers offer software on the market that users can effortlessly install on their phones. This great step forward, however, also poses some threats to user privacy: applications often ask for permissions that reveal private information such as the user's location, contacts and messages. While some mechanisms to prevent leaks of user privacy to applications have been proposed by the research community, these solutions fail to consider that application markets are primarily driven by advertisements that rely on accurately profiling the user. In this paper we take into account that there are two parties with conflicting interests: the user, interested in maintaining their privacy, and the developer, who would like to maximize their advertisement revenue through user profiling. We have conducted an extensive analysis of more than 250,000 applications in the Android market. Our results indicate that the current privacy protection mechanisms are not effective, as developers and advert companies are not deterred. Therefore, we designed and implemented a market-aware privacy protection framework that aims to achieve an equilibrium between the developer's revenue and the user's privacy. The proposed framework is based on the establishment of a feedback control loop that adjusts the level of privacy protection on mobile phones in response to advertisement-generated revenue.

View original page

31 January 12:45The Yin and Yang Sides of Embedded Security / Christof Paar (Ruhr University Bochum)

SS03, Computer Laboratory, William Gates Building

Through the prevalence of interconnected embedded systems, the vision of pervasive computing has become reality over the last few years. As part of this development, embedded security has become an increasingly important issue in a multitude of applications. Examples include the Stuxnet virus, which has allegedly delayed the Iranian nuclear program; killer applications in the consumer area like iTunes or Amazon's Kindle, whose business models rely heavily on IP protection; and even medical implants like pacemakers and insulin pumps that allow remote configuration. These examples show the destructive and constructive aspects of modern embedded security. For us embedded security researchers, the following definition of yin and yang can be useful for resolving this seeming conflict: "The concept of yin yang is used to describe how polar opposites or seemingly contrary forces are interconnected and interdependent in the natural world, and how they give rise to each other in turn." (OK, the "natural world" part is not a 100% fit here.) In this presentation I will talk about some of our research projects over the last few years which dealt with both the yin and yang aspects of embedded security.


In 1-2 generations of automobiles, car2car and car2infrastructure communication will be available for driver-assistance and comfort applications. The emerging car2x standards call for strong security features. The large volume of data (up to several thousand incoming messages per second), the strict cost constraints, and the embedded environment make this a challenging task. We show how an extremely high-performance digital signature engine was realized using low-cost FPGAs. Our signature engine is currently widely used in field trials in the USA. The next case study addresses the other end of the performance spectrum, namely lightweight cryptography. PRESENT is one of the smallest known ciphers and can be realized with as few as 1000 gates. The cipher was designed for extremely cost- and power-constrained applications such as RFID tags, which can be used, e.g., as a tool for anti-counterfeiting of spare parts, or for other low-power applications. PRESENT is currently being standardized by ISO.


As "yang examples" of our research, we will show how two devices with very large real-world deployment can be broken using physical attacks. First, we show a recent attack against a modern contactless smart card equipped with 3DES. The card is widely used in authentication and payment systems. The second attack breaks the bitstream encryption of current FPGAs. These are reconfigurable hardware devices which are popular in many digital systems. We were able to extract AES and 3DES keys from a single power-up of the reconfiguration process. Once the key has been recovered, an attacker can clone, reverse engineer and alter a presumably secure hardware design.

View original page

30 January 16:15Statistical Attack on RC4: Distinguishing WPA / Serge Vaudenay (EPFL)

Lecture Theatre 2, Computer Laboratory, William Gates Building

At Eurocrypt'11, we presented an attack framework on RC4 with applications to the analysis of WEP and WPA. We obtained an efficient distinguisher for WPA and the best theoretical key recovery attack on WPA so far. In this presentation we revisit this work and give new results. We identify several flaws in the analysis and correct them. This is joint work with Pouyan Sepehrdad and Martin Vuagnoux.

View original page

30 January 14:00Extracting Unknown Keys from Unknown Algorithms Encrypting Unknown Fixed Messages and Returning no Results / David Naccache (ENS)

Lecture Theatre 2, Computer Laboratory, William Gates Building

In addition to its usual complexity assumptions, cryptography silently assumes that information can be physically protected in a single location. As we now know, real-life devices are not ideal and confidential information leaks through different physical channels. Whilst most aspects of side channel leakage (cryptophthora) are now well understood, no attacks on totally unknown algorithms are known to date. This paper describes such an attack. By _totally unknown_ we mean that no information on the algorithm's mathematical description (including the plaintext size), the microprocessor or the chip's power consumption model is available to the attacker.

View original page

24 January 16:15User choice of PINs and passphrases / Joseph Bonneau (University of Cambridge)

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk will highlight work from two upcoming papers at Financial Cryptography and USEC which includes the first empirical data on how humans choose numerical PINs or multi-word passphrases. Combined with the increasing amount of data on password choice, we can introduce new statistical metrics to evaluate the security provided by human-chosen distributions.

View original page

10 January 10:00Homomorphic Encryption from Ring Learning with Errors / Michael Naehrig, Technische Universiteit Eindhoven

Large lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns that can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption scheme is homomorphic, the cloud can still perform meaningful computations on the data, even though it is encrypted.
In fact, we now know a number of constructions of fully homomorphic encryption schemes that allow arbitrary computation on encrypted data.
In the last two years, solutions for fully homomorphic encryption have been proposed and improved upon, but all currently available options seem to be too inefficient to be used in practice. However, for many applications it is sufficient to implement somewhat homomorphic encryption schemes, which support a limited number of homomorphic operations. They can be much faster and more compact than fully homomorphic schemes.

This talk will focus on describing the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the ring learning with errors (RLWE) problem.
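To give a flavour of computing on encrypted data, here is a toy symmetric scheme based on plain LWE rather than the ring-LWE construction discussed in the talk (our simplification; the parameters are illustrative and far too small to be secure). It supports homomorphic addition of encrypted bits, i.e. XOR:

```python
import random

# Toy symmetric encryption with additively homomorphic ciphertexts.
n, q = 32, 2 ** 15   # dimension and modulus (illustrative only)

def keygen():
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, bit):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-4, 5)                       # small noise term
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def add(ct1, ct2):
    """Homomorphic addition: the sum decrypts to the XOR of the bits."""
    (a1, b1), (a2, b2) = ct1, ct2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

def decrypt(s, ct):
    a, b = ct
    m = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 < m < 3 * q // 4 else 0        # round to 0 or q/2
```

The holder of the ciphertexts can compute `add` without knowing the key, but the noise term grows with each operation, which is exactly why somewhat homomorphic schemes support only a limited number of operations before decryption fails.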

2011

View original page

21 November 15:00Real-Time and Real Trustworthiness: Timing Analysis of a Protected OS Kernel / Bernard Blackham (NICTA)

FW26, Computer Laboratory, William Gates Building

Protected operating systems have been an elusive target of static worst-case execution time (WCET) analysis, due to a combination of their size, unstructured code and tight coupling with hardware. As a result, critical hard real-time systems are usually developed without memory protection, in order to provide guarantees on their response time.

In this talk, I will explore a WCET analysis of seL4, a third-generation microkernel. seL4 is the world's first formally-verified operating-system kernel, featuring machine-checked correctness proofs of its complete functionality. This makes seL4 an ideal platform for security-critical systems. Adding temporal guarantees also makes seL4 a compelling platform for safety- and timing-critical systems. It enables hard real-time systems with less critical time-sharing components to be integrated on the same processor, supporting enhanced functionality while keeping hardware and development costs low.

The talk will focus on the more interesting aspects of the analysis, and in particular, properties of the seL4 code base which made life easier in the process.

This work was presented at: Real-time Systems Symposium 2011 (Vienna, Austria)

Bio: Bernard is a PhD candidate at the University of New South Wales and NICTA in Sydney, Australia. His PhD relates to real-time aspects of the seL4 microkernel. Bernard's research interests include static analysis, process checkpointing, and generally messing with anything executable. Bernard also trains the Australian team for the International Olympiad in Informatics.

View original page

17 November 16:00Quantifying Location Privacy / George Theodorakopoulos (EPFL, University of Derby)

SS03, Computer Lab, William Gates Building

The popularity of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular.
As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remain problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy.

I will talk about how we address these issues by providing a formal framework for the analysis of LPPMs; it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. By formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We find that popular privacy metrics, such as k-anonymity and entropy, do not correlate well with the success of the adversary in inferring users' locations.
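A toy example of why entropy can mislead (our illustration, not the talk's formalism): two posteriors over locations on a line can have identical entropy yet leave the adversary with very different expected estimation error.

```python
import math

def entropy(p):
    """Shannon entropy of a posterior, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def adversary_error(p):
    """Expected distance between the true location and the adversary's
    best single guess, over locations 0..len(p)-1 on a line."""
    return min(sum(p[loc] * abs(loc - guess) for loc in range(len(p)))
               for guess in range(len(p)))

near = [0.5, 0.5, 0.0, 0.0]   # user is in one of two adjacent cells
far  = [0.5, 0.0, 0.0, 0.5]   # user is in one of two distant cells
# Both posteriors have exactly 1 bit of entropy, yet the adversary's
# expected error is 0.5 cells for `near` and 1.5 cells for `far`.
```

Measuring the adversary's actual inference error, rather than the shape of the posterior alone, captures this difference.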

Joint work with R. Shokri, J.-Y. Le Boudec, and J.-P. Hubaux.


Bio: George Theodorakopoulos is a Lecturer/Senior Lecturer at the University of Derby. He received his B.Sc. at the National Technical University of Athens (NTUA), Greece, and his M.Sc. and Ph.D. at the University of Maryland, in 2002, 2004 and 2007, respectively. From 2007 to 2011, he was a senior researcher at EPFL working with Prof. Jean-Yves Le Boudec.

His research is on network security, privacy, and trust. Together with his Ph.D. advisor, John S. Baras, he has received the best paper award at WiSe'04 and the 2007 IEEE ComSoc Leonard Abraham prize, and he has co-authored the book "Path Problems in Networks" on algebraic (semiring) generalizations of shortest path algorithms and their applications to networking problems.

View original page

31 October 13:00Web mining and privacy: foes or friends? / Bettina Berendt

SS03, William Gates Building

Web mining (i.e., data mining applied to Web content, link, or usage data) is often regarded as a premier foe of privacy, and techniques for "privacy-preserving data mining" are seen as remedies. In this talk, I want to challenge this view by investigating different notions of privacy and different forms and stages of Web mining. As part of this, I will highlight the importance of different perspectives and present tools we developed for analysing data in this way.

View original page
View slides/notes

25 October 16:15Facial Analysis for Lie Detection / Hassan Ugail (University of Bradford)

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk will centre around our recent work on computer based human facial analysis for lie detection. The talk will focus on how cues from both visual and thermal domain can be identified to detect potential deception. Discussions will also focus on the experimental setup through which sufficient data has been collected and analysed to determine the individual baseline which serves as the ground truth for interrogation. Further our current integrated setup for non-invasive lie detection will be outlined and future direction of this research will be discussed.

View original page
View slides

06 October 14:15Building Trusted Systems with Protected Modules / Bryan Parno (Microsoft Research)

Lecture Theatre 2, Computer Laboratory, William Gates Building

As businesses and individuals entrust more and more sensitive tasks (e.g., paying bills, shopping online, or accessing medical records) to computers, it becomes increasingly important to ensure this trust is warranted. However, users are understandably reluctant to abandon the low cost, high performance, and flexibility of today's general-purpose computers. In this talk, I will describe Flicker, an architecture for constructing protected modules. Flicker demonstrates that we can satisfy the need for features and security by constructing an on-demand secure execution environment, using a combination of software techniques and recent commodity CPU enhancements. This provides a solid foundation for constructing secure systems that must coexist with standard software; the developer of a security-sensitive code module need only trust her own code, plus as few as 250 lines of Flicker code, for the secrecy and integrity of her code's execution. However, for many applications, secrecy and integrity are insufficient; thus, I'll discuss techniques for providing practical state continuity for protected modules. To ensure the correctness of our design, we develop formal, machine-verified proofs of safety. To demonstrate practicality, we have implemented our architectures on Linux and Windows running on AMD and Intel.

Bio

Dr Bryan Parno, Microsoft Research Redmond, received the 2010 Doctoral Dissertation Award from ACM for "resolving the tension between adequate security protections and the features and performance that users expect in a digitized world" and has recently co-authored the book "Bootstrapping Trust in Modern Computers" with Jon McCune and Adrian Perrig.

2010 ACM doctoral dissertation award:

http://www.acm.org/press-room/news-releases/2011/dd-award-2010

Bootstrapping Trust in Modern Computers:

http://www.springerlink.com/content/k16537/

View original page

29 September 16:00Twitter bots / Miranda Mowbray (HP Labs Bristol)

FW26, Computer Laboratory, William Gates Building

A particular feature of some social networks, including Twitter, is that software programmes can act within them in a similar way to human beings – indeed, in some cases it may not be obvious whether you are communicating with a human being or a piece of software.

There has been a rapid increase in the amount of automated use of Twitter. I will give some examples of such use, and discuss some potential implications for Twitter data mining, and for security/privacy. My talk will include both some older results and results from some very recent data analysis.

View original page

08 September 13:30The IITM Model and its Application to the Analysis of Real-World Security Protocols / Ralf Küsters, University of Trier

Large lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

A prevalent way in cryptography to design and analyze cryptographic protocols in a modular way is the simulation-based approach. Higher-level components of a protocol are designed and analyzed based on lower-level idealized components, called ideal functionalities. Composition theorems then allow one to replace the ideal functionalities by their realizations, altogether resulting in a system without idealized components.

In this talk, I first provide some background on the simulation-based approach and then briefly introduce the Inexhaustible Interactive Turing Machine (IITM) model, a model which, compared to other models for simulation-based security, is particularly simple and expressive. Although modularity is key to taming the complexity of real-world security protocol analysis, simulation-based approaches have rarely been used to analyze such protocols. In the past few years, we have developed a framework for the faithful and modular analysis of real-world security protocols based on the IITM model. I will present this framework and also discuss what has hindered the use of the simulation-based approach before.

View original page

02 August 16:15The great censorship war of 2011: are we winning? / Mystery speaker

Lecture Theatre 2, Computer Laboratory, William Gates Building

A report from the front line

View original page

19 July 16:15Evolutionary Software Repair / Stephanie Forrest, University of New Mexico

Lecture Theatre 2, Computer Laboratory, William Gates Building

Bio: Stephanie Forrest is Professor of Computer Science at the University of New Mexico in Albuquerque, and she is Co-Chair of the Santa Fe Institute Science Board. Her research studies adaptive systems, including immunology, evolutionary computation, biological modeling, computer security, and software. Professor Forrest received M.S. and Ph.D. degrees in Computer and Communication Sciences from the University of Michigan and a B.A. from St. John's College. Before joining UNM in 1990 she worked for Teknowledge Inc. and was a Director's Fellow at the Center for Nonlinear Studies, Los Alamos National Laboratory. She currently serves on the Computing Research Association CCC Council.

View original page

17 May 16:15Practical Linguistic Steganography using Synonym Substitution / Ching-Yun (Frannie) Chang & Stephen Clark, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Linguistic Steganography is concerned with hiding information in a natural language text, for the purposes of sending secret messages. A related area is natural language watermarking, in which information is added to a text in order to identify it, for example for the purposes of copyright. Linguistic Steganography algorithms hide information by manipulating properties of the text, for example by replacing some words with their synonyms. Unlike image-based steganography, linguistic steganography is in its infancy with little existing work. In this talk we will motivate the problem, in particular as an interesting application for Natural Language Processing (NLP) and especially natural language generation. Linguistic steganography is a difficult NLP problem because any change to the cover text must retain the meaning and style of the original, in order to prevent detection by an adversary.

Our method embeds information in the cover text by replacing words in the text with appropriate substitutes. We use a large database of word sequences collected from the Web (the Google n-gram data) to determine if a substitution is acceptable, obtaining promising results from an evaluation in which human judges are asked to rate the acceptability of modified sentences.
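The acceptability check described above can be sketched as follows; the tiny count table, threshold, and example words here are illustrative stand-ins for the Google n-gram data, not figures from the work.

```python
# Sketch of an n-gram-based synonym-substitution check: a candidate
# replacement word is accepted only if the n-gram it forms with its
# surrounding context occurs frequently in a large corpus. The counts
# below are invented; in the real system they come from Web-scale data.

NGRAM_COUNTS = {
    ("a", "large", "house"): 5200,
    ("a", "big", "house"): 4100,
    ("a", "sizeable", "house"): 12,
}

THRESHOLD = 100  # hypothetical minimum count for an acceptable n-gram

def substitution_acceptable(left, candidate, right):
    """Accept a synonym if its n-gram with the context is frequent enough."""
    return NGRAM_COUNTS.get((left, candidate, right), 0) >= THRESHOLD

print(substitution_acceptable("a", "big", "house"))       # True
print(substitution_acceptable("a", "sizeable", "house"))  # False
```

Which synonym is chosen then encodes hidden bits, while the frequency check keeps the modified sentence plausible to a human reader.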

View original page

09 May 14:00(Research) Influences of wind on radiowave propagation in foliated fixed wireless system / (Research) Information flow control for static enforcement of user-defined privacy policies / Sören Preibusch and Tien Han Chua

SS03, William Gates Building

Influences of wind on radiowave propagation in foliated fixed wireless system, Tien Han Chua

From field measurement data collected over a two-year period, the influences of wind speed and wind direction on temporal fading in foliated fixed wireless links will be presented. The physical wind-foliage interactions and radiowave propagation mechanisms which could contribute to such fading events will be discussed. Finally, the possibilities to model the temporal fading through ray tracing based on geometrical optics and uniform theory of diffraction will be investigated.

Information flow control for static enforcement of user-defined privacy policies, Sören Preibusch

Web sites for retailing or social networking could turn privacy into a competitive advantage by implementing superior data protection practices compared to alternative service providers. One important prerequisite is the enforceability of privacy guarantees. Past information leaks at companies that promote themselves as privacy-friendly demonstrate that current certification practices are insufficient.

Information flow control (IFC) allows software programmers and auditors to detect and prevent the sharing of information between different parts of a program which, as a matter of policy, should be kept logically separate. However, the lack of widespread use of IFC suggests technology and usability barriers to adoption.

I will review pragmatic issues and systematic limitations of using JIF, a programming language that provides IFC on top of Java. The emphasis will be on personal experiences and lessons learnt in implementing the first Web-based IFC case-study with customer-negotiated restrictions on data recipients and usage. As an outlook, I'll consider how combining server-side information flow control with client-side scripting could implement the sticky privacy policy paradigm.

View original page

05 May 14:15Introduction to MILS and the LynuxWorks Separation Kernel / Rance DeLong, LynuxWorks Inc.

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

View original page

04 May 15:30The Bluespec hardware definition language / Joe Stoy, Bluespec Inc.

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

View original page · View slides/notes

04 May 14:15Reflection on Java Security and Its Practical Impacts / Li Gong

Lecture Theatre 1, Computer Laboratory, William Gates Building

In this talk I look back at a (then) new Java security architecture that was designed 15 years ago and is now standard across all Java platforms, and draw lessons from that experience. For example: design security technologies that are appropriate for the target set of "customers" (e.g., programmers or users?); manage the constant conflict between the urge (of the enforcers) to protect and the desire (of the enforced) for freedom; and recognise why lasting impact is often practical rather than theoretical, given that no useful security is absolute. This will not be a typical research talk, but I will throw in some anecdotal stories to (try to) make it worthwhile.

Speaker's Bio: Li Gong was in the PhD program at the Computer Lab from 1987 till 1990. He had a flourishing research career before joining the newly formed JavaSoft in 1996 as Chief Java Security Architect, where he led the design and implementation of a new Java security architecture that is now in common use. His corporate career included general manager of Sun Microsystems' China R&D center, general manager of the online division of MSN in China for Microsoft, and now CEO of Mozilla Online Ltd., the Beijing-based subsidiary of the Mozilla Corporation. He also has an entrepreneurial side and has participated in a number of startups in Silicon Valley and in China.

He served as both Program Chair and General Conference Chair for ACM CCS, IEEE S&P, and IEEE CSFW. He was Associate Editor of ACM TISSEC and Associate Editor-in-Chief of IEEE Internet Computing. He held visiting positions at Cornell and Stanford, and was a Guest Chair Professor at Tsinghua University, Beijing. He has 14 issued US patents (2 of which were among the 7 patents that Oracle cited in the lawsuit against Google in August 2010), co-authored 3 books (published by Addison Wesley and O’Reilly) and many technical articles, and received the 1994 Leonard G. Abraham Award given by the IEEE Communications Society for “the most significant contribution to technical literature in the field of interest of the IEEE.”

View original page

03 May 15:45Architectures for Practical Client-Side Security / Virgil Gligor, Carnegie Mellon University

Lecture Theatre 2, Computer Laboratory, William Gates Building

Few of the security architectures proposed over the past four decades (e.g., fine-grain domains of protection, security kernels, virtual machines) have made a significant difference to client-side security. In this presentation, I examine some of the reasons for this and some of the lessons learned to date. Focus on client-side security is warranted primarily because it is substantially more difficult to achieve than server security in practice, since clients interact with human users directly and have to support their security needs. I argue that system and application partitioning to meet user security needs is now feasible [2,3,5], and that special focus must be placed on how to design and implement trustworthy communication between users and their partitions and between partitions themselves.

Trustworthy communication goes beyond secure channels, firewalls, guards and filters. The extent to which one partition accepts input from or outputs to another depends on the trust established with the input provider and output receiver. It also depends on input-rate throttling and output propagation
control, which often require establishing some degree of control over remote communication end points. I illustrate some of the fundamental challenges of
trustworthy communication at the user level, and introduce the notion of optimistic trust with its technical requirements for deterrence for non-compliant input providers and output receivers. Useful insights for trustworthy communication are derived from the behavioral economics, biology
[1] and social [4] aspects of trust.

References

[1] E. Fehr, “On the Economics and Biology of Trust,” Journal of the European Economic Association, April – May 2009, pp. 235-266.

[2] B. Lampson, "Usable Security: How to Get It," Comm. of the ACM, vol. 52, no. 11, Nov. 2009.

[3] J. McCune, Y. Li, N. Qu, Z. Zhou, A. Datta, V. Gligor, and A. Perrig, "TrustVisor: Efficient TCB Reduction and Attestation," Proc. of IEEE Symp. on Security and Privacy, Oakland, CA, May 2010.

[4] F. Stajano and P. Wilson, "Understanding Scam Victims: Seven Principles for Systems Security," University of Cambridge Computer Laboratory, UCAM-CL-TR-754, Aug. 2009.

[5] A. Vasudevan, B. Parno, N. Qu, V. Gligor and A. Perrig, "Lockdown: A Safe and Practical Environment for Security Applications," Technical Report CMU-CyLab-09-011, July 14, 2009.

View original page

03 May 14:45CTSRD: Capability CPUs revisited / Peter Neumann, SRI International / Robert Watson, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

View original page

03 May 14:15An overview of the DARPA CRASH research programme / Howie Shrobe, DARPA

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

View original page

28 April 13:15Using the Cambridge ARM model to verify the concrete machine code of seL4 / Magnus Myreen (University of Cambridge)

Computer Laboratory, William Gates Building, Room SS03

The L4.verified project has proved functional correctness of C code
which implements a general-purpose operating system. The C code is
about 10,000 lines long and is designed to run on ARM processors. The
200,000-line L4.verified proof currently bottoms out at the level of C
code, i.e. the C compiler is currently a trusted component in the
intended workflow.

In this talk, we will describe how we are using the Cambridge model of
the ARM instruction set architecture (ISA) to remove the C compiler
from the trusted computing base. That is, we are extending the
existing L4.verified proof downwards so that it bottoms out at a much
lower level, namely, the concrete ARM machine code which runs directly
on ARM hardware.

The L4.verified project and the Cambridge ARM project have for years
been developed independently of one another. The main challenge is
now: how do we bridge the gap between these separate projects? Our
solution is to apply a technique which we call decompilation into
logic. Our tool, a decompiler, translates ARM machine code into
functional programs that are automatically verified to be functionally
equivalent with respect to the Cambridge model of the ARM ISA. We
apply our decompiler to the output of the C compiler to turn the seL4
binary into a large functional program. A connection can then be
proved semi-automatically between this functional program and the
semantics of the C code used in the L4.verified proof.

This talk describes ongoing work which, when complete, will remove the
need to trust the C compiler and the C semantics. The new proof will
instead have the Cambridge ARM model as a trusted component.

This is joint work with Thomas Sewell, Michael Norrish and Gerwin
Klein of NICTA, Australia.

View original page

13 April 16:00Mobile Social Networks and Context-Awareness: Usage, Privacy and a Solution to Texting While Driving? / Janne Lindqvist (Carnegie Mellon University)

Lecture Theatre 2, Computer Laboratory, William Gates Building

Many location-sharing systems have been developed over the past 20 years, and only recently have these
systems started to be adopted by consumers. One of the major successes so far in terms of user-adoption is Foursquare, which reports having ca. 7.5 million users as of March 2011. We studied both qualitatively and quantitatively how and why people use Foursquare, and how they manage their privacy. We will report on our user studies and surprising uses of Foursquare. Furthermore, we will discuss our ongoing work in using context-awareness and location sharing to nudge
people not to use their mobile phones while driving.

Speaker's bio:
Janne Lindqvist is a Postdoctoral Fellow with the Human-Computer Interaction Institute at Carnegie Mellon University. Janne works at the intersection of mobile computing, systems security and human-computer interaction. His current projects include usable privacy interfaces for mobile phones, context-aware mobile mashups, and mitigating problems with mobile
phone usage while driving. Before joining academia, Janne co-founded a wireless networks company, Radionet, which was represented in 24 countries before being sold to Florida-based Airspan Networks in 2005.

View original page

12 April 16:15What is Software Assurance? / John Rushby, SRI International

Lecture Theatre 2, Computer Laboratory, William Gates Building

Safety-critical systems must be supplied with strong assurance that they are, indeed, safe. Top-level safety goals are usually stated quantitatively--for example, "no catastrophic failure in the lifetime
of all airplanes of one type"--and these translate into probabilistic requirements for subsystems, and hence for software. In this way, we obtain quantitative reliability requirements for software: for example, the probability of failure in flight-critical software must not exceed 10^-9 per hour.

But the methods by which assurance is developed for critical systems are mostly about correctness (inspections, formal verification, testing etc.) and these do not seem to support quantitative reliability claims. Furthermore, more stringent reliability goals require more extensive correctness-based assurance. How does more assurance of correctness deliver greater reliability?

I will resolve this conundrum by arguing that what assurance actually does is provide evidence for assessing a probability of "possible perfection." Possible perfection does relate to reliability and has
other attractive properties that I will describe. In particular, it allows assessment of the reliability of certain fault-tolerant architectures. I will explain how formal verification can allow assessment of a probability of perfection, and will discuss plausible values for this probability and consequences for correctness of verification systems themselves.
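The link between possible perfection and reliability can be illustrated with a one-line bound in the spirit of the argument above (the numbers below are invented for illustration, not figures from the talk):

```python
# Toy arithmetic for the "possible perfection" argument: if assurance
# supports an assessed probability p_perfect that the software is free of
# faults, then the failure probability is bounded by the chance it is
# imperfect times the failure probability of an imperfect program.
# Both numbers here are hypothetical.
p_perfect = 0.999            # assessed probability of perfection
pfd_if_imperfect = 1e-6      # failure probability per hour, if imperfect
bound = (1 - p_perfect) * pfd_if_imperfect
print(bound)  # approximately 1e-9, meeting a 10^-9 per-hour requirement
```

The point of the sketch is only that assurance of correctness feeds the first factor, which is how "more assurance" can deliver "greater reliability".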

This is joint work with Bev Littlewood of City University, London UK.

View original page

06 April 14:00Netalyzr: Network Measurement as a Network Security Problem / Nicholas Weaver, ICSI and UC Berkeley

Lecture Theatre 2, Computer Laboratory, William Gates Building

Netalyzr, at http://netalyzr.net, is a widely used network measurement and debugging tool, with over 180,000 executions to date. Netalyzr is a signed Java applet coupled to a custom suite of test servers in order to detect and debug problems with DNS, NATs, hidden HTTP proxies, and other issues. Netalyzr has revealed many problems in the Internet landscape, ranging from broken NAT DNS resolvers, hidden
caches and malfunctioning proxies, to deliberate ISP manipulations of DNS results, including some ISPs which use DNS to man-in-the-middle search properties like Yahoo, Google, and Bing. Although Netalyzr is
a network measurement tool, writing it was a network security process, designed to detect unusual conditions by deliberately bending (or outright breaking) protocol specifications, using unintended features of Java, and a general dose of "sneaky".

This talk discusses the design of Netalyzr, interesting cases observed during development, and highlights some of the interesting results including HTTP caches, hidden proxies, chronic overbuffering, and DNS misbehaviors.

View original page

15 March 16:15Caveat coercitor: towards coercion-evident elections / Mark Ryan (University of Birmingham)

Lecture Theatre 2, Computer Laboratory, William Gates Building

It has proved very difficult, and is perhaps impossible, to design an electronic voting system which satisfies the three desired properties of voter-incoercibility, results-verifiability, and usability. Therefore, we
have looked at forgoing incoercibility, and replacing it with "coercion evidence" -- after an election, it will be possible for observers to see how much coercion has taken place, and therefore whether the results
constitute a mandate for the winner. The system we describe is intended to be practical to use.

The talk will include an introduction to the concerns and issues of electronic voting, as well as a brief survey of existing systems. The body of the talk presents ongoing, unpublished work. I will welcome comments during and after the seminar.

View original page

03 March 16:00Promoting location privacy... one lie at a time / Daniele Quercia (University of Cambridge)

SS03 of the Computer Lab

Nowadays companies increasingly aggregate location data from different sources on the Internet to offer location-based services such as estimating current road traffic conditions, and finding the best nightlife locations in a city. However, these services have also caused outcries over privacy issues. As the volume of location data being aggregated expands, the comfort of sharing one's whereabouts with the public at large will unavoidably decrease. Existing ways of aggregating location data in the privacy literature are largely centralized in that they rely on a trusted location-based service.
Instead, we propose a piece of software (SpotME) that can run on a mobile phone and allows privacy-conscious users of location-based services to report, in addition to their actual locations, also some erroneous locations. The erroneous locations are selected by a randomized response algorithm in a way that makes it possible to accurately collect and process aggregated location data without affecting the fidelity of the result. We evaluate the accuracy of SpotME in estimating the number of people in a certain location on two very different realistic mobility traces: the mobility of vehicles in urban, suburban and rural areas, and the mobility of subway train passengers in Greater London. We find that erroneous locations have little effect on the estimations (in both traces, the error is below 18% for a situation in which more than 99% of the locations are erroneous), yet they guarantee that users cannot be localized with high probability. Also, the computational and storage overheads for a mobile phone running SpotME are negligible, and the communication overhead is limited (SpotME adds an overhead of 21 byte/s).
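A toy sketch of the randomized-response idea behind such a scheme (all parameters and the estimator are invented for illustration; the actual SpotME algorithm differs in detail):

```python
# Randomized-response aggregation sketch: each user reports their true
# location with probability p, and one of L locations chosen uniformly at
# random otherwise. The aggregator can unbias the per-location count
# without knowing which individual reports were truthful.
import random

random.seed(0)
L = 50          # number of locations (hypothetical)
p = 0.01        # probability of a truthful report (>99% erroneous)
N = 100_000     # number of users
true_loc = 7    # in this toy example, everyone is actually at location 7

reports = [true_loc if random.random() < p else random.randrange(L)
           for _ in range(N)]

observed = reports.count(true_loc)
# E[observed] = N * (p + (1 - p)/L), so invert to estimate the true count:
estimate = (observed - N * (1 - p) / L) / p
print(round(estimate))  # lands near the true count N = 100000
```

Even with almost all reports being lies, the unbiased estimate recovers the aggregate count to within a few percent, while any single report reveals little about its sender.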

View original page

23 February 14:15Reasoning about Software Safety Integrity and Assurance / Tim Kelly, University of York

Lecture Theatre 1, Computer Laboratory

With increasing amounts of software being used within safety-critical applications, there is growing concern as to how designers and regulators can justify that this software is sufficiently safe for use. At the system level, it is reasonable and sensible to talk in terms of risk mitigation, and to establish arguments that the probability of occurrence of identified risks is acceptably low. Whilst it is not difficult to cascade these risk-based requirements to software, it becomes extremely difficult to reason about software system failure probabilistically (for all but trivial examples). Instead, qualitative arguments and evidence (concerning the satisfaction of specific software safety properties and requirements) are typically offered. These can be test-based arguments, or analytic (e.g. proof-based) arguments. However, these arguments (even when deductive reasoning is employed) cannot be established with absolute certainty. There remains epistemic uncertainty surrounding such approaches: Has the software (and its interface with the real world) been modeled adequately? Can the abstractions used be justified? Are the tools used in the process qualified? This talk will examine the problems of exchanging safety arguments concerning real-world risk (associated with aleatoric uncertainty) for issues of confidence in software safety arguments (associated with epistemic uncertainty). We’ll present these concerns in the context of structured (but informal) argumentation approaches used within software safety justifications, and the guidance that we have developed for safety-critical industries as part of the Software Systems Engineering Initiative (www.ssei.org.uk).


Biography

Dr Tim Kelly is a Senior Lecturer within the Department of Computer Science at the University of York. He is Academic Theme Leader for Dependability within the Ministry of Defence funded Software Systems Engineering Initiative, and was Deputy Director of the Rolls-Royce Systems and Software Engineering University Technology Centre. His research interests include safety case management, software safety analysis and justification, software architecture safety, certification of adaptive and learning systems, and the dependability of “Systems of Systems”. He has supervised a number of research projects in these areas with funding and support from the European Union, EPSRC, Airbus, the Railway Safety and Standards Board, Rolls-Royce, BAE Systems and the Ministry of Defence. Dr Kelly has published over 140 papers on safety-critical systems development and assurance issues.

View original page

01 February 13:00A report on the IAB/W3C Internet Privacy Workshop / Dr. David Evans (Computer Laboratory)

Computer Laboratory, William Gates Building, Room FW11

Back in December I went to the IAB/W3C Internet Privacy Workshop (http://www.iab.org/about/workshops/privacy/) at MIT. I'll outline what was said and where the emphasis lay.
In summary: the browser is really important and authors of standards documents should include a section on relevance to privacy.

2010

View original page · View slides

09 December 14:15Reverse Engineering Malware / Hassen Saidi, SRI International

Lecture Theatre 2, Computer Laboratory, William Gates Building

Program analysis is a challenging task when source code is available. It is even more challenging when neither the source code nor debug information is present. The analysis task is rendered even more challenging when the code has been obfuscated to prevent the analysis from being carried out. Malware authors often employ a myriad of evasion techniques to impede automated reverse engineering and static analysis efforts of their binaries. The most popular technologies include "code obfuscators" that serve to rewrite the original binary code to an equivalent form that provides identical functionality while defeating signature-based detection systems. These systems significantly complicate static analysis, making it challenging to uncover the malware intent and the full spectrum of embedded capabilities. While code obfuscation techniques are commonly integrated into contemporary commodity packers, from the perspective of a reverse engineer, deobfuscation is often a necessary step that must be conducted independently after unpacking the malware binary. In this presentation, we review the main challenges when analyzing binary programs and explore techniques for recovery of information that allows program understanding and reverse-engineering. In particular, we describe a set of techniques for automatically unrolling the impact of code obfuscators with the objective of completely recovering the original malware logic. We have implemented a set of generic deobfuscation rules as a plug-in for the popular IDA Pro disassembler. We use sophisticated obfuscation strategies employed by two infamous malware instances from 2009, Conficker C and Hydraq (the binary associated with the Aurora attack), as case studies. In both instances our deobfuscator enabled a complete decompilation of the underlying code logic. This work was instrumental in the comprehensive reverse engineering of the heavily obfuscated P2P protocol embedded in the Conficker worm.

View original page · View slides/notes

07 December 16:15Bumping attacks: the affordable way of obtaining chip secrets / Sergei Skorobogatov - Computer Laboratory ( University of Cambridge)

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk presents a new class of fault injection attacks called bumping attacks. These attacks are aimed at data extraction from secure embedded memory, which usually stores critical parts of algorithms, sensitive data and cryptographic keys. As a security measure, read-back access to the memory is not implemented, leaving only authentication and verification options for integrity check. Verification is usually performed on relatively large blocks of data, making brute-force searching infeasible. I will evaluate memory verification and AES authentication schemes used in secure microcontrollers and a highly secure FPGA. By attacking the security in three steps, the search space can be reduced from an infeasible 2^100 to an affordable 2^15 guesses per block of data. This development was achieved by finding a way to preset certain bits in the data path to a known state using semi-invasive optical bumping. Further improvements to these attacks involved using a non-invasive power glitching technique for the secure microcontroller. Partial reverse engineering of the FPGA made bumping attacks possible via the use of non-invasive threshold voltage alteration combined with power glitching. Research into positioning and timing dependency showed that Flash memory bumping attacks are relatively easy to carry out.

View original page

23 November 16:15Physical Attacks on PIN Entry Devices / Matt Scott, ACI Worldwide

Lecture Theatre 2, Computer Laboratory, William Gates Building

Since the implementation of EMV (Chip and PIN) into mainstream retail banking environments of Western Europe there has been an exponential increase in physical attacks against PIN Entry Devices. This seminar attempts to apprise the audience of the methods of attack, the impact on consumers, and the preventive measures to stem these attacks.

View original page

16 November 16:15The distribution of different sources of malware / Francis Turner, ThreatSTOP Inc.

Lecture Theatre 2, Computer Laboratory, William Gates Building

The compromised systems that push malware (and its products such as spam) are widespread on the Internet today. Many researchers and experts have claimed
that certain countries seem to "specialize" in particular sorts of malware without attempting to note whether there is any correlation between the
different sorts. Thanks to a database that contains up to four years' worth of data, the speaker believes he can draw some conclusions about where different sorts of malware originate and how compromised computers (and hence IP addresses) change what malware they deliver.

Speaker bio

Francis Turner is VP Product Management for ThreatSTOP Inc., a leader in the IP reputation space. He has worked for over 20 years in the IT and data
communication industries, starting with a stint at IBM in the mid 1980s before reading Computer Science at Magdalene College, Cambridge. Subsequently he
worked for Madge Networks and Bay Networks. After the latter merged with Nortel, he became the European Product Manager for their enterprise switching
division. In 2001 he left Nortel Networks to be CIO at a small biotech company that was seminal in the use of computation in the analysis and creation of new
enzymatic processes. Most recently he worked at a consultancy firm assisting ICT companies with their multinational product marketing and business development.

View original page

09 November 16:15Privacy preserving smart-metering / George Danezis, Microsoft Research

Lecture Theatre 2, Computer Laboratory, William Gates Building

Metering consumption and billing has been a traditional reason to collect, process and store detailed records. Proposed business models and government practices, such as electronic road tolling, pay-as-you-drive insurance, smart grids for electricity, and even virtualised computing and storage, rely on charging users based on ever more fine-grained information about their usage and consumption. This is at odds with the privacy consumers have been accustomed to. Current implementation proposals require huge databases of personal information to be built -- we show that these are not necessary.

We present protocols for metering and fine-grained billing that do not require the collection, processing or storage of personal information. We focus on the example of smart grids to show how meter readings can be cryptographically transformed by users' devices to apply a tariff policy, and produce a bill for the utility companies. Using zero-knowledge techniques, our protocols perfectly hide all privacy-sensitive information, while protecting the integrity of the bills. We also discuss practical deployment issues and three implementations providing different trade-offs in speed, scalability and software correctness.

View original page

05 November 14:00Using SAT Solvers for Cryptographic Problems / Mate Soos, Pierre and Marie Curie University

Small lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

As SAT solvers have become more advanced, their use-cases have expanded. One such area where SAT solvers now perform competitively is cryptography. In this talk we investigate why and how SAT solvers are used in cryptography, and what advantages they bring relative to other solving methods such as brute force or Gröbner basis algorithms. We present several specific use-cases and highlight some future possibilities.

View original page

02 November 16:15An investigation into Chinese cybercrime and the underground economy in comparison with the West / Michael Yip, University of Southampton

Lecture Theatre 2, Computer Laboratory, William Gates Building

With 420 million Internet users, China has become the world’s largest Internet population, and Chinese cyber-security has become globally significant. In this investigation, cybercrimes in China were studied from both sociological and technical perspectives using an array of methods including literature review, passive monitoring of online forums and interest groups, as well as establishing direct contact with Chinese cybercriminals.

Hacking was found to be immensely popular in China, with a population of 3.8 million registered users spanning just 19 online hacker forums. Financial and political factors were found to be the main motivations for Chinese cybercriminals. Observations from Chinese hacktivist forums during recent Chinese cyber-attacks against Japan have brought to light some valuable insights into the true state of hacktivism in China and the level of tolerance from the Chinese government towards such actions.

Furthermore, it was found that not only do organised cybercrimes exist in China but also an underground economy as sophisticated as that in the West is flourishing at a rapid pace. Estimates from Chinese security experts suggest that the size of the Chinese underground economy may be much larger than that observed in the West. With the support of the Serious Organised Crime Agency (SOCA), the frameworks of organised cybercrime as observed in the West were compared with those observed in China. Significant similarities and differences were found including differences in the tools of trade used and some of the pricing of goods and services advertised in the underground economy. A generic mapping of the underground economy was deduced from the comparison of frameworks.

View original page · View slides

25 October 16:00Understanding Cyberattack as an Instrument of U.S. Policy / Herbert Lin (The National Academies, USA)

LT1, Computer Laboratory, William Gates Builiding

Much has been written about the possibility that terrorists or hostile nations might conduct cyberattacks against critical sectors of the U.S.
economy. However, the possibility that the United States might conduct its own cyberattacks -- defensively or otherwise -- has received almost no public discussion. Recently, the US National Academies performed a comprehensive unclassified study of the technical, legal, ethical, and policy issues surrounding cyberattack as an instrument of U.S. policy. This talk will provide a framework for understanding this emerging topic and the critical issues that surround it.

Bio: Dr. Herbert Lin is chief scientist at the Computer Science and Telecommunications Board, National Research Council of the National Academies, where he has been study director of major projects on public policy and information technology. These studies include a 1996 study on national cryptography policy (Cryptography's Role in Securing the Information Society), a 1991 study on the future of computer science (Computing the Future), a 1999 study of Defense Department systems for command, control, communications, computing, and intelligence (Realizing the Potential of C4I: Fundamental Challenges), a 2000 study on workforce issues in high-technology (Building a Workforce for the Information Economy), a
2002 study on protecting kids from Internet pornography and sexual exploitation (Youth, Pornography, and the Internet), a 2004 study on aspects of the FBI's information technology modernization program (A Review of the FBI's Trilogy IT Modernization Program), a 2005 study on electronic voting (Asking the Right Questions About Electronic Voting), a 2005 study on computational biology (Catalyzing Inquiry at the Interface of Computing and Biology), a 2007 study on privacy and information technology (Engaging Privacy and Information Technology in a Digital Age), a 2007 study on cybersecurity research (Toward a Safer and More Secure Cyberspace), a 2009 study on healthcare informatics (Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions), and a 2009 study on offensive information warfare (Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities). Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.


20 October 16:15Across the Pond: An Update on Health Privacy and Health Data Security. How are American patients faring? / Deborah C. Peel, Founder and Chair, Patient Privacy Rights

Lecture Theatre 2, Computer Laboratory, William Gates Building

I will offer an environmental scan of 'real life' examples of privacy and security disasters, discuss recent developments at the federal and state levels, look at the latest evidence of patients' expectations, and conclude with solutions.

The stimulus bill and healthcare reform require massive adoption of EHRs/HIT and data exchanges, with very fast timetables for spending the funds. Despite strong new consumer rights and protections, the regulatory process has weakened, delayed, and eliminated consumer protections. So the billions will go to fund state and federal initiatives that will not be trusted. Model-Ts are being bought, not electric cars, locking in dinosaur systems and technologies.

While industry and many staff in government agencies continue to actively oppose privacy rights and patient consent, the heads of several major agencies, including HHS, the FTC, and the FCC have stepped up to affirm an Administration-wide shift to individual control over personal information. Lawsuits are beginning. The public is getting more alarmed not less.

Will the US become the world's most comprehensive surveillance state or will Americans wake up?


12 October 16:15Hierarchies, Lowerarchies, Anarchies, and Plutarchies: Historical Perspectives of Composably Layered High-Assurance Architectures / Peter Neumann, Principal Scientist, SRI International Computer Science Lab

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk will consider some of the challenges of holistically designing predictably trustworthy system and network architectures, with consideration of various past efforts and some prospects for the future. In scope are topics such as what might be called the father and son of hierarchical trustworthy systems, respectively Multics (rings, symbolic dynamic linking, nested directories) and SRI's Provably Secure Operating System PSOS design (tagged and typed more-or-less object-oriented capabilities in hardware and software), MLS and MILS architectures (beginning with KSOS and KVM), separation kernels and virtual machines (with pointers to Rushby and DeLong's recent work). Some of the underlying concepts are of course abstraction, modularity, strong encapsulation, explicit mappings between layers, explicit dependency analyses, high assurance, and basic principles that can enhance modular composition, considered in my DARPA CHATS report, Principled Assuredly Trustworthy Composable Architecture. As an example of the pervasive interdependencies that must be addressed, I will briefly summarize some aspects of A Roadmap for Cybersecurity Research that we developed for Doug Maughan at the U.S. Department of Homeland Security, November 2009.

NOTE: Plutarch's writings (e.g., Parallel Lives) stimulated among Romans a considerable sense of the importance of understanding historical people and events. He observed that little seemed to have changed in human nature. We might observe today that, in some regards, relatively little has changed in the commercial development of high-assurance systems, despite some major advances in the research communities. We would like to fix that in the future.


08 October 14:00Extracting the Semantic Signature of Malware, Metamorphic Viruses and Worms / RK Shyamasundar; Tata Institute of Fundamental Research, Mumbai

FW11, Computer Laboratory

[Shyam is visiting the CL until 14 October 2010.]

Malware is increasingly becoming a serious threat and a nuisance in the information and network age. Human experts extract a signature of the malware (usually a text pattern) -- a process that involves complex analysis of encrypted and/or packed binaries -- and deploy it to protect against the malware.

However, this approach does not work for polymorphic and metamorphic malware, which have the ability to change shape from attack to attack; also, metamorphic virus detection (even assuming fixed length) is NP-complete. To
counter these advanced forms of malware we need semantic signatures which capture the essential behaviour of the malware (which remains unchanged across variants).
In this talk, we present an algorithmic approach for extracting the semantic signature of a malware -- as a regular expression over API calls -- and demonstrate via experiments its efficacy in detecting and predicting malware variants. Our approach involves two steps. In the first step, we collect and abstract the behaviour (as a sequence of security relevant API/system calls)
of the malware in different runs. In the second step, we inductively learn (under the supervision of a human expert) a regular expression that tightly fits these behaviours (generalizing where necessary). This regular expression then acts as the semantic signature of the malware. We performed experiments with the metamorphic virus Etap/Simile, and the email worms Beagle, Netsky and MyDoom.

Experimental results give us good confidence that our approach can be effectively used for malware detection.
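
As a toy illustration of the idea of a semantic signature expressed as a regular expression over API calls (the API names, the pattern, and the matching scheme below are invented for illustration; real signatures are learned from traces of actual malware):

```python
import re

# Hypothetical semantic signature: a regular expression over the names of
# security-relevant API calls, with arbitrary other calls allowed between
# the essential ones. An email worm might, say, enumerate files, copy
# itself, and then send mail.
SIGNATURE = re.compile(r"FindFirstFile(,\w+)*,CopyFile(,\w+)*,send")

def matches_signature(trace):
    """Check an abstracted API-call trace against the signature."""
    return SIGNATURE.search(",".join(trace)) is not None

benign = ["CreateFile", "ReadFile", "CloseHandle"]
wormish = ["FindFirstFile", "RegOpenKey", "CopyFile", "socket", "send"]

print(matches_signature(benign))   # False
print(matches_signature(wormish))  # True
```

Because the signature constrains only the order of the essential calls, it still matches variants that interleave junk calls, which is the point of a semantic rather than syntactic signature.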


01 July 16:15Colour, usability and computer security / Jeff Yan, Newcastle University

Room SS03, Computer Laboratory, William Gates Building

The use of colour in user interfaces is extensive. It is typically a usability issue, and has rarely caused any security concerns. In this talk, I show that the use of colours in the design of CAPTCHAs, a standard security technology that has found widespread application in commercial websites, can have interesting but critical implications for both security and usability. For example, we have broken multiple CAPTCHAs, including the scheme deployed by Megaupload.com (one of the largest file-sharing websites), by exploiting the colour patterns in these schemes.


04 May 15:00Side-Channel Cryptanalysis / Joseph Bonneau (Cambridge University)

LT1, Computer Laboratory, William Gates Building

Abstract not available


04 May 14:00Privacy as a competitive advantage / Sören Preibusch

SS03, William Gates Building

This work-in-progress talk reports on results from field experiments and market analysis to quantify opportunities for online retailers in making privacy a competitive advantage.


29 April 16:00Detecting Temporal Sybil Attacks / Neal Lathia (UCL)

FW26, Computer Laboratory, William Gates Building

Recommender systems are vulnerable to attack: malicious users may deploy a set of sybils to inject ratings in order to damage or modify the output of Collaborative Filtering (CF) algorithms. Previous work in the area focuses on designing sybil profile classification
algorithms: to protect against attacks, the aim is to find and isolate any sybils. These methods, however, assume that the full sybil profiles have already been input to the system. Deployed recommender systems, on the other hand, operate over time: recommendations may be damaged as sybils inject profiles (rather than only when all the malicious ratings have been input), and system administrators may not know when their system is under attack. In this work, we address the problem of temporal sybil attacks, and propose and evaluate methods for monitoring global, user and item behaviour over time in order to detect rating anomalies that reflect an ongoing attack. We conclude by discussing the consequences of our temporal defenses, and how attackers may design ramp-up attacks in order to circumvent them.
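
A minimal sketch of the global-behaviour monitoring idea (the windowing, the threshold, and the data are my own illustrative choices, not the paper's method):

```python
from statistics import mean, stdev

def detect_anomalies(counts, k=3.0, min_history=5):
    """Flag time windows whose rating volume deviates by more than k
    standard deviations from the volumes seen so far."""
    flagged, history = [], []
    for t, c in enumerate(counts):
        if len(history) >= min_history:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(c - mu) > k * sigma:
                flagged.append(t)
        history.append(c)
    return flagged

# A steady stream of ~100 ratings per window, with a sybil burst at window 5.
stream = [100, 102, 98, 101, 99, 1000, 100]
print(detect_anomalies(stream))  # [5]
```

Note that this naive monitor keeps flagged windows in its history, so a burst inflates later statistics; gradually increasing injection rates are exactly the kind of "ramp-up" attack the talk discusses.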

Bio: Neal is a Research Fellow in the Department of Computer Science, University College London, working on the EU iTour project with Dr Capra (http://www.itourproject.com). His PhD thesis (to be imminently
submitted) was supervised by Prof. Hailes and titled "Evaluating Collaborative Filtering Over Time;" the thesis dealt with modeling, evaluating, and improving the temporal performance of recommender systems. More details are available on:
http://www.cs.ucl.ac.uk/staff/n.lathia


27 April 16:15Internet Voting: Threat or Menace / Jeremy Epstein, SRI International

Lecture Theatre 2, Computer Laboratory, William Gates Building

Internet voting - or at least the possibility of Internet voting - is
on the lips of legislators and election officials across the US and
around the world, including in the UK. But what does Internet
voting really mean - is it registering online, requesting a ballot
online, printing a blank ballot online, or casting your vote online?
Will Internet voting really save money and increase participation?
What are the risks of voting online, and how can we mitigate those
risks? How does identity validation relate to Internet voting? What
can we learn from other technologies that have moved online? How
do we (and should we) answer the frequently asked question 'if I
can bank online and shop online, why can't I vote online'? This
talk will address these questions and more on one of the hottest
topics at the intersection of technology and public policy.


26 April 15:00Building Secure Systems On and For Social Networks / Nishanth Sastry

LT1, Computer Laboratory, William Gates Building

Abstract not available


21 April 14:15Privacy in Advertising: Not all Adware is Badware / Paul Francis - MPI Kaiserslautern, Germany

Lecture Theatre 1, Computer Laboratory

Online advertising is a major economic force in the Internet today.
Today's deployments, however, increasingly erode user privacy as advertising companies like Google target users ever more precisely. In this talk, we suggest that it is possible to build an advertising system that fits well into the existing online advertising business model, targets users extremely well, is very private, and scales well. The key to our approach is adware: client computers run a local software agent that profiles the user, requests the appropriate ads, displays them locally, and reports on views and clicks, all while preserving privacy. This talk outlines the system design and discusses its pros and cons.
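
The client-side model described can be caricatured as follows (all names, keywords, and the scoring rule are invented; the hard part of the real system, reporting views and clicks without deanonymising the user, is not shown):

```python
import random

# A batch of ads shipped by the broker, each tagged with interest keywords.
ADS = [
    {"id": 1, "keywords": {"cycling", "outdoors"}},
    {"id": 2, "keywords": {"finance", "investing"}},
    {"id": 3, "keywords": {"cooking"}},
]

def pick_ad(local_profile, ads):
    """Rank ads by keyword overlap with a profile that never leaves
    the client; fall back to a random ad when nothing matches."""
    scored = [(len(ad["keywords"] & local_profile), ad) for ad in ads]
    best_score, best_ad = max(scored, key=lambda s: s[0])
    return best_ad if best_score > 0 else random.choice(ads)

profile = {"cycling", "cooking", "travel"}  # built locally by the agent
print(pick_ad(profile, ADS)["id"])  # 1
```

The broker learns only which ads were fetched in bulk, not which one the profile selected, which is what lets targeting coexist with privacy in this design.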

*Bio:*

Paul Francis is a tenured faculty member at the Max Planck Institute for Software Systems in Germany. Paul has held research positions at Cornell University, ACIRI, NTT Software Labs, Bellcore, and MITRE, and was Chief Scientist at two Silicon Valley startups. Paul's research centers around routing and addressing problems in the Internet and P2P networks. Paul's innovations include NAT, shared-tree multicast, the first P2P multicast system, the first DHT (as part of landmark routing), and Virtual Aggregation. These days, Paul is wondering why so much of our private data is being held in the cloud.


16 April 14:00Declassification Policy Inference / Jeff Vaughan (Harvard University)

Room FW11, Computer Laboratory, William Gates Building

Security-type systems can provide strong information security guarantees
but often require enormous programmer effort to be used in practice. In
this talk, I will describe inference of fine-grained, human-readable
declassification policies as a step towards providing security
guarantees that are proportional to a programmer's effort: the
programmer should receive weak (but sound) security guarantees for
little effort, and stronger guarantees for more effort.

I will present an information-flow type system in which policies may
be inferred from existing program structure. The inference algorithm
can find precise and intuitive descriptions of potentially dangerous
information flows in a program, and policies specify what information is
released under what conditions. A semantic security condition specifies
what it means for a program to satisfy a policy.

Our work demonstrates the soundness of an analysis for programs in a
simple imperative language with exceptions. Furthermore, we have
extended the analysis to an object-sensitive interprocedural analysis
for single-threaded Java 1.4 programs and developed a prototype
implementation.


18 March 16:15The Path Towards Scalable Practical Security for Web Transactions / Dr Corrado Ronchi, EISST Ltd

Lecture Theatre 2, Computer Laboratory, William Gates Building

The focus of this presentation will be to review the current status of Web transaction security and address the question of why e-criminals still enjoy the upper hand notwithstanding the availability of means for achieving strong transaction security. In particular, the following topics will be addressed:

* the failure of strong multi-factor authentication methods
* a taxonomy of attack vectors as the basis for a proper evaluation of protection strength
* the need for a multi-layered approach to transaction security
* how application hardening impacts the e-crime economics (or hacking ROI)
* a new method for dynamic application authentication
* the impact of usability on security: how to thwart a provably secure transaction validation method


11 March 16:00How Google Tests Software / James Whittaker (Google)

LT2, Computer Laboratory, William Gates Building

The mythology around Google Test runs like a ghostly spirit through the larger software quality community. Google automates everything. Google's cloud is the ultimate tester playground. Sometimes myth is larger than reality and sometimes the reverse is true. In this talk James Whittaker will dispel some Google Test myths and reinforce others. There is indeed a secret sauce we mix into our product quality efforts, and many of its flavors can be sampled in this short presentation.

- Test machines and test labs available in any number, on-demand
- Developer resources and skill set applied to testing
- Internal tools that trump commercially available ones
- Innovation is the soup du jour of the Google tester

Speaker: James A. Whittaker joined Google in May 2009 as a Test Engineering Director. Formerly an Architect with Microsoft’s Visual Studio Team System, he directed product strategy for Microsoft’s test business and led internal teams in the application of exploratory testing. Dr. Whittaker previously served as Professor of Computer Science at Florida Tech. There, he was named a Top Scholar by The Journal of Systems and Software, and led a research team that created many leading-edge testing tools and technologies, including the acclaimed runtime fault injection tool Holodeck. Whittaker is author of Exploratory Software Testing: Tips, Tricks, Tours and Techniques to Guide Test Design and How to Break Software. He is coauthor (with Hugh Thompson) of How to Break Software Security, co-author (with Mike Andrews) of How to Break Web Software and author of 50+ peer-reviewed papers on software development and security, and the holder of patents on various inventions in security testing and defensive security techniques. Dr. Whittaker has a PhD in computer science from the University of Tennessee.


10 March 14:15Aura: A Programming Language with Authorization and Audit / Steve Zdancewic - University of Pennsylvania, USA

Lecture Theatre 1, Computer Laboratory

Existing mechanisms for authorizing and auditing the flow of
information in networked computer systems are insufficient to meet the
security requirements of high-assurance software systems. Current
best practices typically rely on operating-system provided file
permissions for authorization and an ad-hoc combination of OS and
network-level (e.g. firewall-level) logging to generate audit trails.

This talk will describe work on a security-oriented programming
language called Aura that attempts to address this problem of
auditable information flows in a more principled way. Aura supports a
built-in notion of principal and its type system incorporates ideas
from authorization logic and information-flow constraints. These
features, together with the Aura run-time system, enforce strong
information-flow policies while generating good audit trails. These
audit trails record access-control decisions (such as uses of
downgrading or declassification) that influence how information flows
through the system. Aura's programming model is intended to smoothly
integrate information-flow and access control constraints with the
cryptographic enforcement mechanisms necessary in a distributed
computing environment.


This is joint work with Jeff Vaughan, Limin Jia, Karl Mazurak,
Jianzhou Zhou, Joseph Schorr, and Luke Zarko.


04 March 18:30Chip and PIN -- notes on a dysfunctional security system / Saar Drimer, Computer Lab

Lecture Theatre 1, Cambridge University Computer Laboratory, JJ Thomson Avenue, Madingley Road, Cambridge

In the UK, Chip and PIN has been with us for over five
years. Originally promoted as a highly secure system, it turned out to
have many shortcomings -- some by design, and others by bad design.
The talk will introduce the EMV framework, describe known attacks --
including the latest "no-PIN" attack -- and discuss the contributing
factors that made them possible (poor incentives, sloppy regulation,
specification overload, design-by-committee, cross-border
interoperability, and others).

The talk is based on work done in collaboration with Steven Murdoch,
Ross Anderson and Mike Bond. More information and published papers can
be found here:

http://www.cl.cam.ac.uk/research/security/banking/


04 March 16:00Detecting Sybil attacks and recommending social contacts from proximity records / Daniele Quercia (University of Cambridge)

FW26, Computer Laboratory, William Gates Building

I’ll present two algorithms called MobID[1] and FriendSensing[2]. Using short-range technologies (e.g., Bluetooth) on their mobile phones, users keep track of other phones in their proximity. From these proximity records, MobID identifies Sybil attackers in a decentralized way, and FriendSensing recommends social contacts:

- The idea behind MobID is that a device manages two small networks in which it stores information about the devices it meets: its network of friends contains honest devices, and its network of foes contains suspicious devices. By reasoning on these two networks, the device is then able to determine whether an unknown individual is carrying out a Sybil attack or not.

- FriendSensing processes proximity records using a variety of algorithms that are based on social network theories of geographical proximity and of link prediction. It then returns a personalized and automatically generated list of people the user may know.
We'll see how both algorithms perform against real mobility and social network data.
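
A much-simplified sketch of the friend/foe bookkeeping (the suspicion rule here is invented for illustration; MobID's actual test reasons over the structure of the two networks):

```python
class MobIDLite:
    """Toy friend/foe bookkeeping loosely inspired by MobID."""

    def __init__(self):
        self.friends = set()  # devices believed honest
        self.foes = set()     # devices believed suspicious
        self.vouches = {}     # unknown device -> devices vouching for it

    def record_vouch(self, unknown, voucher):
        self.vouches.setdefault(unknown, set()).add(voucher)

    def classify(self, unknown):
        """An unknown device vouched for mainly by foes looks like a sybil."""
        vouchers = self.vouches.get(unknown, set())
        if not vouchers:
            return "unknown"
        foe_ratio = len(vouchers & self.foes) / len(vouchers)
        return "suspect" if foe_ratio > 0.5 else "likely-honest"

m = MobIDLite()
m.friends |= {"alice", "bob"}
m.foes |= {"eve", "mallory"}
m.record_vouch("s1", "eve")
m.record_vouch("s1", "mallory")
m.record_vouch("d1", "alice")
print(m.classify("s1"), m.classify("d1"))  # suspect likely-honest
```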

[1] Sybil Attacks Against Mobile Users: Friends and Foes to the Rescue. Infocom '10

[2] FriendSensing: Recommending Friends Using Mobile Phones. RecSys '09


19 February 17:30Risk, Security and Terrorism / Professor Lucia Zedner, University of Oxford

LMH, Lady Mitchell Hall

Biography

Lucia Zedner is a Law Fellow at Corpus Christi College, Professor of Criminal Justice in the Faculty of Law and a member of the Centre for Criminology at the University of Oxford. She wrote her doctorate and held a Prize Research Fellowship at Nuffield College, Oxford, before taking up a Law Lectureship at the London School of Economics, where she helped found and became Assistant Director of the Mannheim Centre for Criminology and Criminal Justice. She returned to Oxford in 1994, becoming a Reader in 1999 and Professor in 2005. She held a British Academy Research Readership in 2003-5 and has held visiting fellowships at universities in Germany, Israel, America, and Australia. Since 2007 she has also held the position of Conjoint Professor in the Law Faculty at the University of New South Wales, Sydney, where she is a regular visitor.
Lucia Zedner's research interests span criminal justice, criminal law, and legal theory. From her first book on the history of imprisonment, she has gone on to write several books and many articles on criminal justice and penal policy, most recently focussing on aspects of risk, security and terrorism. Recent books include Criminal Justice (2004), Crime and Security (co-edited with Ben Goold, 2006) and Security (2009). With her Oxford colleague, Professor Andrew Ashworth, she has been awarded a major AHRC grant to work on ‘Preventive Justice’, a project that will explore the politics and proliferating policies of risk and prevention; map changing patterns of criminalization and pre-emptive state action; consider their implications for civil liberties; and ask how far the state may go to prevent harm. Its ultimate aim is to develop principles and values to guide and limit states in their use of coercive preventive powers.

Abstract

Social scientists tell us that we now live in a ‘world risk society’. But what does this really mean and what, if anything, do environmental risks, health risks, and natural disasters have in common with those posed by terrorism? When we move from the natural world to human threats are we still dealing with hard science or are we in the realm of speculation? Are the presumptions behind risk-based counter-terrorism policies and the profiling of terrorist suspects safe?
Terrorist acts are exceptionally rare but they pose the risk of catastrophic harm. No surprise then that we consent to intrusive measures that erode civil liberties in the name of avoiding such harms. The conceit of ‘balancing’ liberty and security assumes that by degrading liberty we can reduce risk. In place of balancing might we do better to ask what really is at risk in the war on terror?
We think of the risks posed by terrorism primarily in terms of subjective insecurity and threat to life and property. But countering terrorism carries its own risks – risks to social, political, and economic life and risks to rights (rights to freedom of speech, to privacy, and to freedom of the person). Add to this the risk of marginalising and alienating those we target and we are in danger of allowing responses to terrorism to generate a whole slew of new risks. So my question is what risks are at stake and how we might live with risk without living in terror.


21 January 14:15Proactive Fraud Management over Financial Data Streams / Pedro Sampaio, University of Manchester

Lecture Theatre 2, Computer Laboratory, William Gates Building

Fraud detection within financial platforms is a challenging area with fraud events cutting across multiple financial products, service channels and geographical locations. Fraudsters are continuously seeking security flaws, and rapidly reengineering fraud methods to gain unauthorized access to accounts and execute illegal transactions. Fraud detection systems employed by financial institutions have to respond to new threats by shortening fraud detection points, increasing the accuracy of customer profiling methods, and rapidly deploying new policies to address loopholes. This talk will identify challenges and opportunities linked to proactively managing financial fraud through real-time evaluation of financial data streams. In particular, the talk will discuss how fraud management frameworks and architectures designed using stream-based platforms and policy based languages can be leveraged to increase the effectiveness of fraud management.


19 January 16:15The impact of incident vs. forensic response / Andrew Sheldon (Evidence Talks)

Lecture Theatre 2, Computer Laboratory, William Gates Building

This presentation identifies what steps an organisation needs to take in order to mitigate information risk, reputational damage and financial losses. Using real-world examples, it will show how simple security measures can protect key assets, how internet monitoring can provide an effective early warning of business threats and, critically, how to take effective forensic incident response actions.

This presentation also looks at the top ten mistakes made by companies when reacting to an incident involving their digital environment and provides clear advice about how you should react when your critical data is compromised or you need to investigate a digital incident.


18 January 15:00Anonymity via networks of mixes / Venkat Anantharam, EECS Department, University of California Berkeley.

MR5, CMS, Wilberforce Road, Cambridge, CB3 0WB

Mixes are relay nodes that accept packets arriving
from multiple sources and release them after variable delays to prevent an eavesdropper from associating outgoing packets to their sources. We assume that each mix has a hard latency constraint. Using an entropy-based measure to
quantify anonymity, we analyze the anonymity provided by
networks of such latency-constrained mixes.
Our results are of most interest under
light traffic conditions. A general upper bound is presented that
bounds the anonymity of a single-destination mix network in
terms of a linear combination of the anonymity of two-stage
networks. By using a specific mixing strategy, a lower bound
is provided on the light traffic derivative of the anonymity of
single-destination mix networks. The light traffic derivative of
the upper bound coincides with the lower bound for the case of
mix-cascades (linear single-destination mix networks).
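
The entropy-based anonymity measure mentioned above can be computed directly. A sketch follows; the posterior distribution used here is illustrative, whereas in the talk it arises from the mixing strategy and the traffic model:

```python
from math import log2

def anonymity(source_probs):
    """Shannon entropy of the eavesdropper's posterior over which source
    an outgoing packet came from; higher means more anonymous."""
    return -sum(p * log2(p) for p in source_probs if p > 0)

# A perfect mix of four equally likely sources gives 2 bits of anonymity;
# a skewed posterior gives strictly less.
print(anonymity([0.25, 0.25, 0.25, 0.25]))        # 2.0
print(round(anonymity([0.7, 0.1, 0.1, 0.1]), 2))  # 1.36
```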

Bio:

Venkat Anantharam received the B.Tech in Electronics in 1980
from the Indian Institute of Technology, Madras (IIT-M)
and the M.A. and C.Phil degrees in Mathematics and the
M.S. and Ph.D. degrees in Electrical Engineering in
1983, 1984, 1982 and 1986 respectively,
from the University of California at Berkeley (UCB).
From 1986 to 1994 he was on the faculty of the School of
EE at Cornell University. From 1994 he has been on the faculty of the
EECS department at UCB.

Anantharam received the Philips India Medal and the
President of India Gold Medal from IIT-M in 1980, and an NSF Presidential
Young Investigator award (1988-1993). He is a co-recipient of
the 1998 Prize Paper award of the IEEE Information Theory Society
(with S. Verdú) and a co-recipient of the 2000 Stephen O. Rice Prize Paper
award of the IEEE Communications Theory Society
(with N. McKeown and J. Walrand).
He received the Distinguished Alumnus Award from IIT-M in 2008.
He is a Fellow of the IEEE.

2009


10 December 16:00Security Architectures for Distributed Social Networks / Jonathan Anderson (University of Cambridge)

FW26, Computer Laboratory, William Gates Building

Current practice in social networks requires users to give all of their personal information to a centralized provider, one which may have little competence in security and little incentive to change.
Distributing responsibility for information security to client software solves some problems, but others remain. This talk will describe recent work, current research and future aspirations for privacy-enabling social networking technology.

Bio: Jonathan Anderson is a PhD student in the Security Group. His research focuses on security architectures for the controlled disclosure of user information, especially in the contexts of social networks and operating systems.


04 December 10:00MPhil Mini-Symposium Security Talks / MPhil students, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Five MPhil students will each present a recent security paper for 10 minutes as part of the MPhil Mini-symposium:

http://www.cl.cam.ac.uk/teaching/0910/C00/minisymp/programme.htm

10:00 Omar-Salim Choudary (osc22)

“Optimised to Fail: Card Readers for Online Banking”, Saar Drimer, Steven J. Murdoch, and Ross Anderson, Financial Cryptography and Data Security '09

10:13 Roland Tai (ykrt2)

“RFIDs and secret handshakes: defending against ghost-and-leech attacks and unauthorized reads with context-aware communications”, A. Czeskis, K. Koscher, J. R. Smith, and T. Kohno, ACM CCS '08 (Computer and Communications Security)

10:26 Alexandros Toumazis (at443)

“Tempest in a Teapot: Compromising Reflections Revisited”, M. Backes, T. Chen, M. Duermuth, H. P. A. Lensch, and M. Welk, IEEE Symposium on Security and Privacy 2009

10:39 Danish Zeb (dz245)

“Time and Location Based Services with Access Control”, C. Bertolissi and M. Fernandez, IEEE NTMS '08 (New Technologies, Mobility and Security)

10:52 Guanwei Zeng (gz233)

“Privacy-enabling social networking over untrusted networks”, J. Anderson, C. Diaz, J. Bonneau and F. Stajano, WOSN ’09 (Online Social Networks)


17 November 16:15Understanding scam victims: Seven principles for systems security / Frank Stajano, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

The success of many attacks on computer systems can be traced back to the security engineers not understanding the psychology of the system users they meant to protect. We examine a variety of scams and "short cons" that were investigated, documented and recreated for the BBC TV programme "The Real Hustle" and we extract from them some general principles about the recurring behavioural patterns of victims that hustlers have learnt to exploit.

We argue that an understanding of these inherent "human factors" vulnerabilities, and the necessity to take them into account during design rather than naïvely shifting the blame onto the "gullible users", is a fundamental paradigm shift for the security engineer which, if adopted, will lead to stronger and more resilient systems security.

You can read the full tech report here:

http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-754.pdf


10 November 14:30The Elephant in the Room: Health Information System Security and the User-Level Environment / Juanita Fernando (Monash University)

Room FW11, Computer Laboratory, William Gates Building

*Slides "available":http://www.cl.cam.ac.uk/research/srg/opera/meetings/attachments/2009-11-10-HealthInfSystem_Fernando.pdf .*

*Abstract:*

The patient care context comprises outdated infrastructure, pervasive computer use, shared clinical workspace, aural privacy shortcomings, interruptive work settings, confusing legislation, poor privacy and security (P&S) eHealth training outcomes and inadequate budgets.

Twenty-three medical, nursing and allied health clinicians working in Australia (Victoria) participated in qualitative research examining work practices with P&S for patient care. They criticised a slow, inefficient eHealth information system (eHIS) environment permeated by usability errors. EHealth systems expanded workloads and system demands were onerous, increasing the clinicians’ scepticism of reliance on information technology. Consequently many clinicians had developed trade-offs to avoid reliance on an eHIS.

The trade-offs include IT support avoidance and shared passwords to PKI and computer accounts. Handover sheets populated by transcribed notes were circulated between all clinicians present. These practices ensure paper persistence and escalate P&S threats to data confidentiality, integrity and availability. Study evidence suggests poor eHISs hamper patient care and may represent a larger P&S threat than indicated by studies to date.

*Bio:*

I'm interested in all aspects of health information system security. My research concerns clinical health informatics, bioinformatic data exchange standards and information security. I've developed a particular emphasis on e-health tools and their contribution to workflow methodologies in the health sector.

A member of the Mobile Health Research Group (MHRG), I work very closely with colleagues at Information Technology.

Useful sources of information on health information security and privacy are widely scattered. The web page published by the Australian Privacy Foundation (APF) is a notable exception. I chair the Health Sub Committee and love the work I do with them.

My professional memberships include the Australian College of Health Informatics (ACHI), the Health Informatics Society of Australia (HISA) and the Australian Health Informatics Education Council (AHIEC).

I am the Academic Convenor of the Honours Degree of Bachelor of Medical Science, Medicine, Nursing & Health Sciences at Monash University. Academic oversight of students enrolled for a BMedSc(Hons) degree is challenging but fun, and I'm fascinated by the research programs on which they work.

More "info":http://users.monash.edu.au/~juanitaf/

View original page

02 November 14:00Surveillance in Speculative Fiction: Have Our Artists Been Sufficiently Imaginative? / Roger Clarke, University of New South Wales

FW11

There are many variants of surveillance, many pitfalls, and potentially serious consequences for 'good people' as well as 'the baddies'. Fiction-writers of all kinds have taken advantage of the enormous scope this provides. Writers of speculative fiction have been running ahead of reality for decades; but they need to display more imagination, because reality keeps catching up with them. This paper reviews speculative fiction genres and imaginations, and uses them as a means of identifying several different interpretations of what the surveillance epidemic means for privacy and human freedom.

Roger Clarke is a Canberra-based eBusiness consultant and a Visiting Professor in Cyberspace Law & Policy at UNSW in Sydney and in Computer Science at the Australian National University. He has conducted dataveillance research since the early 1980s, and has been active in privacy advocacy even longer than that. He is currently Chair of the Australian Privacy Foundation.

View original page

28 October 14:15Aggregated Security Monitoring in 10GB networks / Nathan Macrides and Nick McKenzie - Security Engineering, RBS

Lecture Theatre 1, Computer Laboratory

The use of aggregation switch technology to perform 'out-of-band' network security monitoring (forensics, data leakage, intrusion detection) in high volume distributed network infrastructure.

Short bio: Nathan Macrides

Nathan graduated from RMIT University in Melbourne, Australia, with Degrees in Computer Science (B App Sci) and Computer Systems Engineering (B Eng) in 2003. Since then, he has been employed in IT as a Solutions Engineer with a focus on Security. He has worked for various organisations including the European Bank for Reconstruction and Development and BAA, and currently works within the Global Banking & Markets Division of RBS.

Short bio: Nick McKenzie

Nick graduated from Curtin University of Technology in Perth, Australia, with a Bachelor of Commerce (Information Systems) (BCom (IS)) and a Master of E-Commerce (MeCom). He has since been employed by various consulting and financial institutions in Australia and the UK focusing on security assessments, design and architecture reviews. He is currently Head of Security Architecture and Engineering and IS&C Project Support for RBS Global Banking & Markets.

View original page

26 October 16:15Policing Online Games -- Defining A Strong, Mutually Verifiable Reality in Virtual Games / Peter Wayner

Lecture Theatre 2, Computer Laboratory, William Gates Building

Ronald Reagan was fond of saying "trust but verify". Alas, the modern virtual world of games gives users no basis for trust and no mechanism for verification. Foolish people wager real money on a hand of cards dealt to them by an offshore server. Some of the virtual worlds have been rocked by scandals where sys admins take bribes to add special features like unbeatable armor to favored players.

The good news is that we can build strong, virtual worlds that give users the basis for trust and the ability to verify the fairness of a game. The algorithms are well-known and tested, to some extent, by time. This talk will review a number of the classic results designed for playing poker or distributing digital cash, and explore how they can be used to stop some of the most blatant cheating affecting the more sophisticated online world.

View original page · View slides/notes

13 October 16:15Optical surveillance on silicon chips: your crypto keys are visible / Sergei Skorobogatov

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk presents a low-cost approach to optical side-channel attacks on secure semiconductor chips. By using an inexpensive CCD camera to monitor the emission from an operating chip, information stored in SRAM, EEPROM and Flash was successfully recovered. Initially demonstrated on a 0.9-micron microcontroller, this technique was later adapted for a 0.13-micron secure FPGA with an AES decryption engine used for code protection. This shows the danger of optical emission analysis attacks to modern deep-submicron chips. Optical emissions from an operating chip also correlate well with power analysis traces and can therefore be used to estimate the contribution of different areas within the chip. Optical emission analysis can also be used for partial reverse engineering of the chip structure by spotting the active areas. This can assist in carrying out optical fault injection attacks later, thereby saving the time otherwise required for exhaustive search. Practical limits for optical emission analysis in terms of sample preparation, operating conditions and chip technology will be discussed. As with the introduction of probing attacks in the mid-1990s, power analysis attacks in the late 1990s and optical injection attacks in the early 2000s, optical emission attacks will very likely result in the need to introduce new countermeasures during the design of semiconductor chips.

View original page

15 September 16:15So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users / Cormac Herley, Microsoft Research, Redmond

Lecture Theatre 2, Computer Laboratory, William Gates Building

The failure of users to follow security advice has often been noted. They choose weak passwords, ignore security warnings, and are oblivious to certificates. It is often suggested that users are hopelessly lazy and unmotivated on security questions. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. As with many activities, online crime generates direct losses and externalities. The advice offers to shield users from the direct costs of attacks, but burdens them with the indirect costs, or externalities. Since the direct costs are generally small relative to the indirect ones, users reject this bargain. We examine three areas of user education: password rules, phishing site identification, and SSL certificates. In each we find that the advice is complex and growing, but the benefit is largely speculative or moot. In the cases where we can estimate benefit, it emerges that the burden of following the security advice is actually greater than the direct losses caused by the attack.
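Herley's argument is ultimately an accounting exercise: weigh the aggregate time cost of following advice against the direct losses it prevents. A toy version of that comparison, with deliberately hypothetical numbers (none of them from the talk), looks like this:

```python
# Back-of-envelope externality calculation. All figures are
# illustrative assumptions, not Herley's data.
users = 100_000_000           # assumed user population
wage_per_hour = 15.0          # assumed value of a user's time, $/hour
advice_hours_per_year = 1.0   # assumed time each user spends on the advice
victim_rate = 0.001           # assumed fraction of users defrauded per year
loss_per_victim = 500.0       # assumed direct loss per victim, $

# Indirect cost: everyone pays the time tax of following the advice.
cost_of_advice = users * advice_hours_per_year * wage_per_hour   # 1.5e9

# Direct cost: only victims lose money to the attack itself.
direct_losses = users * victim_rate * loss_per_victim            # 5.0e7
```

With these (made-up) figures, the time bill of universal compliance exceeds the direct fraud losses by a factor of thirty, which is the shape of the trade-off the abstract describes.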

Bio:
Cormac Herley is a Principal Researcher at Microsoft Research. His main current interests are data and signal analysis problems that reduce complexity and help users avoid harm. He's been at MSR since 1999, and before that was at HP where he headed the company's currency anti-counterfeiting efforts. Some of his recent published work has focused on problems of passwords and authentication, the economics of cybercrime, phishing prevention technologies and keylogger resistant access to existing web accounts.

He received the PhD degree from Columbia University, the MSEE from Georgia Tech, and the BE(Elect) from the National University of Ireland. He is a former adjunct at UC Berkeley, has authored more than 50 peer reviewed papers, is inventor of 70 or so US patents (issued or pending) and has shipped technologies used by tens of millions of users.

"Web page":http://research.microsoft.com/en-us/people/cormac/

View original page · View slides

30 June 16:15Efficient Implementation of Physical Random Bit Sources / Richard Newell, Actel

Lecture Theatre 2, Computer Laboratory, William Gates Building

Two circuit blocks often needed in conjunction with a physical random
bit source are a Conditioner and a Health Monitor. The Health monitor
is used to make sure that the physical source is generating sufficient
entropy and hasn't failed. The Conditioner concentrates the available
entropy and ensures that its output is statistically indistinguishable
from true random numbers. Efficient examples of both circuits are
proposed that are suitable for FPGA implementations.
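The roles of the two blocks can be sketched in software; the following is a generic illustration (von Neumann debiasing as the conditioner, a repetition-count test as the health monitor), not the specific FPGA circuits proposed in the talk:

```python
def von_neumann_condition(bits):
    """Conditioner sketch: map raw bit pairs 01 -> 0 and 10 -> 1,
    discarding 00 and 11. For independent but biased raw bits this
    removes the bias, concentrating the available entropy."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

def repetition_health_check(bits, cutoff=32):
    """Health monitor sketch: flag failure if any value repeats
    `cutoff` times in a row, which indicates the physical source
    has likely become stuck and stopped producing entropy."""
    run, prev = 0, None
    for b in bits:
        run = run + 1 if b == prev else 1
        prev = b
        if run >= cutoff:
            return False
    return True
```

In hardware both blocks sit between the raw physical source and the consumer, with the health monitor able to gate the output when the source fails.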

View original page

29 May 11:00Online Social Networks and Applications: a Measurement Perspective / Ben Zhao (UCSB)

FW26, Computer Laboratory, William Gates Building

With more than half a billion users worldwide, online social networks such as Facebook are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of Internet applications that integrate relationships from social networks to improve security and performance. But can these applications be effective in real life? And if so, how can we predict their effectiveness when they are deployed on real social networks?

In this talk, we will describe recent research that tries to answer these questions using measurement-based studies of online social networks and applications. Using measurements of a socially-enhanced web auction site, we show how social networks can actually reduce fraud in online transactions. We then discuss the evaluation of social network applications, and argue that existing methods using social graphs can produce misleading results. We use results from a large-scale study of the Facebook network to show that social graphs are insufficient models of user activity, and propose the use of "interaction graphs" as a more accurate model. We construct interaction graphs from our Facebook datasets, and use both types of graphs to validate two well-known social-based applications (Reliable Email and SybilGuard). Our results reveal new insights into both systems and confirm our hypothesis that choosing the right graph model significantly impacts predictions of application performance.

Bio: Ben Zhao is a faculty member at the Computer Science department, U.C. Santa Barbara. Before UCSB, he completed his M.S. and Ph.D. degrees in Computer Science at U.C. Berkeley, and his B.S. from Yale University. His research interests include networking, security and privacy and distributed systems.
He is a recipient of the National Science Foundation's CAREER award, MIT Tech Review's TR-35 Award (Young Innovators Under 35), and is one of ComputerWorld's Top 40 Technology Innovators.

View original page

18 May 14:00(Research) Bluetooth Tracking without Discoverability / (Skills) Deploying web user authentication with Shibboleth / Simon Hay and Sören Preibusch

FW26, William Gates Building

Research: Bluetooth Tracking without Discoverability, Simon Hay


Outdoor location-based services are now prevalent due to advances in mobile technology and GPS. Indoors, however, even coarse location remains unavailable. Bluetooth has been identified as a potential location technology that mobile consumer devices already support, easing deployment and maintenance. However, Bluetooth tracking systems to date have relied on the Bluetooth inquiry mode to constantly scan for devices. This process is very slow and can be a security and privacy risk. In this paper we investigate an alternative: connection-based tracking. This permits tracking of a previously identified handset within a field of fixed base stations. Proximity is determined by creating and monitoring low-level Bluetooth connections that do not require authorisation. We investigate the properties of the low-level connections both theoretically and in practice, and show how to construct a building-wide tracking system based on this technique. We conclude that the technique is a viable alternative to inquiry-based Bluetooth tracking.


Skills: Deploying web user authentication with Shibboleth, Sören Preibusch


Shibboleth is a set of policies and protocols providing an access control system for web-based resources. It is similar to that currently provided by Raven, but extended and standardised to allow users from multiple organisations to access resources provided by other independent organisations. Compared to Raven, Shibboleth involves a higher implementation effort, yet supports a broader range of platforms for deployment. Service providers can define more fine-grained rules for access control and the identity of authenticated users need not be disclosed (privacy-preserving single-sign on).

This talk is intended for Web authors and developers who plan to set up user authorisation and authentication. I will briefly review the architecture and underlying Web service infrastructure for Shibboleth and sketch typical deployment scenarios. More prominently, I will share my own experiences in becoming the owner of the first Shibboleth-protected web site in the University.
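For a flavour of what deployment looks like on the service-provider side, a minimal Apache configuration protecting a path with Shibboleth might read as follows (a sketch assuming the Shibboleth SP module, mod_shib, is installed and an identity provider is already configured; the `/secure` path is an example):

```apache
# Require a Shibboleth session for everything under /secure;
# unauthenticated visitors are redirected to the IdP to log in.
<Location /secure>
  AuthType shibboleth
  ShibRequestSetting requireSession 1
  Require valid-user
</Location>
```

The bulk of the deployment effort lies not here but in the SP/IdP metadata exchange and attribute-release policies that this fragment presumes are in place.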

View original page

14 May 16:00PeerSoN: Privacy-Preserving P2P Social Networks / Sonja Buchegger (Deutsche Telekom Laboratories)

FW26, Computer Laboratory, William Gates Building

Online Social Networks like Facebook, MySpace, Xing, etc. have become extremely popular. Yet they have some limitations that we want to overcome for a next generation of social networks: privacy problems and requirements of Internet connectivity, both of which are due to web-based applications on a central site whose owner has access to all data.

To overcome these limitations, we envision a paradigm shift from client-server to a peer-to-peer infrastructure coupled with encryption so that users keep control of their data and can use the social network also locally, without Internet access. This shift gives rise to many research questions intersecting networking, security, distributed systems and social network analysis, leading to a better understanding of how technology can support social interactions.

Our project, PeerSoN, consists of several parts. One part is to build a peer-to-peer infrastructure that supports the most important features of online social networks in a distributed way. We have written a first prototype to test our ideas. Another part is concerned with encryption, key management, and access control in such a distributed setting. Extending the distributed nature of the system, we investigate how to integrate such peer-to-peer social networking with ubiquitous computing and delay-tolerant networks, to enable direct exchange of information between devices and to take into account local information. http://www.peerson.net

Bio: Sonja Buchegger is a senior research scientist at Deutsche Telekom Laboratories, Berlin. In 2005 and 2006, she was a post-doctoral scholar at the University of California at Berkeley, School of Information. She received her Ph.D. in Communication Systems from EPFL, Switzerland, in 2004, a graduate degree in Computer Science in 1999, and undergraduate degrees in Computer Science in 1996 and in Business Administration in
1995 from the University of Klagenfurt, Austria. In 2003 and 2004 she was a research and teaching assistant at EPFL and from 1999 to 2003 she worked at the IBM Zurich Research Laboratory in the Network Technologies Group. Her current research interests are economics, security, and privacy of self-organized networks.

View original page · View slides/notes

12 May 16:15Whither Challenge Question Authentication? / Mike Just, University of Edinburgh

Lecture Theatre 2, Computer Laboratory, William Gates Building

Questions such as "What is your mother's maiden name?" and "What was your first pet's name?" are commonly used today to authenticate users, often in support of account recovery when a password is forgotten. Despite their ubiquity, there exists very little published research as to their efficacy. Some recent, high profile compromises suggest that they are not sufficiently secure. Recent research seems to point to a similar conclusion. In this talk, I'll examine some of the recent research into the security and usability of challenge question authentication, and discuss whether it remains a viable option for user authentication.

View original page

30 April 17:00A conversation with Phil Zimmermann / Phil Zimmermann

Lecture Theatre 2

Phil Zimmermann is a veteran of the crypto wars of the 1990s, when governments tried to ban and then to control cryptography. After he wrote Pretty Good Privacy (PGP), which he made available online in 1991, he was arraigned before a grand jury on suspicion of violating export-control laws. PGP became the most widely-used encryption program in the world and the US government dropped its case in 1996. That attempt to control crypto petered out during the dotcom boom and was abandoned by Al Gore during the 2000 election.

But the surveillance state has constantly reinvented itself, from the illegal wiretapping of US citizens under George W. Bush to the proliferation of CCTV cameras in Britain and - now - the Interception Modernisation Program.

This rising tide of surveillance since 9/11 brought Phil back into the business of crypto activism with Zfone, a secure VOIP program.

This meeting will be structured not so much as a lecture but a conversation, which will range over the technology and policy of crypto wars old and new.

View original page

28 April 16:15Bypassing Physical Security Systems / Marc Weber Tobias, Investigative Law Offices

Lecture Theatre 2, Computer Laboratory, William Gates Building

The presentation will include a detailed review regarding the protection of high security facilities, including airports and aircraft, power transmission facilities, and computer server rooms. The emphasis will be on liability and security issues that may result from an undue reliance on certain high security locking systems and technology. I will discuss a number of misconceptions and why these facilities may be at risk, even with some of the most sophisticated physical access hardware and software.

Specific problems inherent in conventional locking hardware will be the primary focus, together with an analysis of high security mechanical locks and electronic access control systems produced by many of the Assa Abloy companies. These technologies include the Cliq, Logic, and NexGen among others. The representations of certain manufacturers will be analyzed, and potential vulnerabilities in these high-tech systems will be explored, together with the liability that may flow to users if these systems are circumvented.

Since the publication of _OPEN IN THIRTY SECONDS_ (2008), which details the compromise of Medeco high security locks, intensive research has been ongoing in the U.S. and Europe regarding the security of different electronic access control systems. The results, first examined in Dubai, will be included in the new supplement to our book and explored in depth in future presentations later this year.

Ross Anderson wrote the foreword for Mr. Tobias' book, which can be purchased here:

http://www.amazon.com/OPEN-THIRTY-SECONDS-Cracking-America/dp/0975947923/

View original page

27 April 14:00(Research) OpenRoomMap: Mapping the Gates building / (Skills) Best Papers Lent 2009 / Andrew Rice and DTG members

FW26, William Gates Building

Research: OpenRoomMap

In this talk I will outline the goals of the OpenRoomMap project and discuss our initial trial mapping of the William Gates Building. Please have a look (and do some mapping) at http://www.cl.cam.ac.uk/research/dtg/openroommap before the talk.


Skills: Best Papers Lent 2009

Each member of the group will submit an entry for the best paper they have read this term. We will have a very brief presentation on as many as we can fit into 30 minutes.

View original page

07 April 16:15Cloning MiFare Classic rail and building passes, anywhere, anytime / Nicolas Courtois, University College London

Lecture Theatre 2, Computer Laboratory, William Gates Building

MiFare Classic is the most popular contactless smart card, with some 200 million copies in circulation worldwide. At Esorics 2008, Dutch researchers showed that the underlying cipher, Crypto-1, can be cracked in as little as 0.1 seconds if the attacker can eavesdrop on the RF communications with the (genuine) reader.

We discovered that a MiFare Classic card can be cloned in a much more practical card-only scenario, where the attacker only needs to be in the proximity of the card for a number of minutes, therefore making usurpation of identity through pass cloning feasible at any moment and under any circumstances. For example, anybody sitting next to the victim on a train or on a plane is now able to clone his/her pass. Other researchers have also (independently from us) discovered this vulnerability (Garcia et al., 2009); however, our attack is different and does not require any precomputation. In addition, we discovered that a yet unknown proportion of MiFare Classic cards are even weaker, and we have in our possession a MiFare Classic card from a large Eastern-European city that can be cloned in seconds.

Paper: http://eprint.iacr.org/2009/137

View original page

26 March 16:00Pointless Tainting? Evaluating the practicality of pointer tainting / Asia Slowinska (Vrije Universiteit Amsterdam)

FW26, Computer Laboratory, William Gates Building

This talk evaluates pointer tainting, an incarnation of Dynamic Information Flow Tracking (DIFT). Pointer tainting has been used for two main purposes: detection of privacy-breaching malware (e.g., trojan keyloggers obtaining the characters typed by a user), and detection of memory corruption attacks against non-control data (e.g., a buffer overflow that modifies a user's privilege level). The technique is considered one of the only methods for detecting such attacks in unmodified binaries. Unfortunately, almost all of the incarnations of pointer tainting are flawed. We found that pointer tainting itself generates the conditions for false positives. We analyse the problems in detail and investigate various ways to improve the technique. Most have serious drawbacks in that they are either impractical (and still incur many false positives), and/or cripple the technique's ability to detect attacks. We argue that, depending on architecture and operating system, pointer tainting may have some value in detecting memory corruption attacks (albeit with false negatives and not on the popular x86 architecture), but it is not suitable for automated detection of privacy-breaching malware such as keyloggers.


Bio: Asia Slowinska is a third-year PhD student at the Vrije Universiteit Amsterdam. Her research concerns intrusion detection, signature generation, and honeypots. Currently she's interning with MSRC.

View original page · View slides/notes

24 March 16:15Privacy Implications of Public Listings on Social Networks / Joseph Bonneau, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

The popular social networking website Facebook exposes a
“public view” of user profiles to search engines which includes eight of the user’s friendship links. This talk will examine what interesting properties of the complete social graph can be approximated from this public view. In experiments on real social network data, we were able to accurately approximate the degree and centrality of nodes, compute small dominating sets, find short paths between users, and detect community structure. This work demonstrates that it is difficult to safely reveal limited information about a social network.

Full paper:

http://www.cl.cam.ac.uk/~jcb82/8_friends_paper.pdf

View original page

19 March 15:00High Assurance Smart Cards for Multinational Coalitions and Other Applications of National Security / Paul Karger, IBM Watson Research Center

Lecture Theatre 2, Computer Laboratory, William Gates Building

Caernarvon is a high-assurance secure operating system for smart cards, designed to pass the highest level (EAL7) of the Common Criteria. It includes a multi-organizational mandatory access control model designed to provide both security and integrity controls that can scale to cover the entire Internet. These multi-organizational controls can make it much easier to implement applications for multinational military coalitions, such as electronic visas that could be stored on the same smart card chip as is used for electronic passports.

View original page · View slides

10 March 16:15The Effectiveness of T-way Test Data Generation / Michael Ellims

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk reports the results of a study comparing the effectiveness of automatically generated tests constructed using random and t-way combinatorial techniques on safety-related industrial code, using mutation adequacy as the acceptance metric. A reference point to current best practice is provided by using hand-generated test vectors constructed during development to establish minimum acceptance criteria. The study shows that 2-way adequate test sets are not adequate as measured by the mutant kill rate compared with hand-generated test sets of similar size, but that higher-factor t-way test sets can perform at least as well. To reduce the computation overhead of testing large numbers of vectors over large numbers of mutants, a staged optimising approach to applying t-way tests is proposed and evaluated, which shows improvements in execution time and final test set size.
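For intuition about what a "2-way adequate" test set is: it covers every pair of values across every two parameters at least once. A hedged sketch of a textbook greedy covering-array construction (a generic illustration, not the tooling used in the study):

```python
from itertools import combinations

def _key(i, vi, j, vj):
    # Canonical, order-independent identifier for a value pair.
    return (i, vi, j, vj) if i < j else (j, vj, i, vi)

def pairwise_tests(params):
    """Greedy 2-way (pairwise) test generation.

    `params` is a list of value lists, one per input parameter.
    Returns test vectors such that every pair of values from any
    two parameters appears together in at least one test."""
    n = len(params)
    uncovered = set()
    for i, j in combinations(range(n), 2):
        for vi in params[i]:
            for vj in params[j]:
                uncovered.add(_key(i, vi, j, vj))
    tests = []
    while uncovered:
        # Seed each test from an arbitrary uncovered pair, so every
        # iteration makes progress and the loop terminates.
        i, vi, j, vj = next(iter(uncovered))
        test = [None] * n
        test[i], test[j] = vi, vj
        for k in range(n):
            if test[k] is not None:
                continue
            # Fill remaining parameters with the value that covers
            # the most still-uncovered pairs.
            test[k] = max(params[k], key=lambda v: sum(
                _key(k, v, m, test[m]) in uncovered
                for m in range(n) if test[m] is not None))
        for a, b in combinations(range(n), 2):
            uncovered.discard(_key(a, test[a], b, test[b]))
        tests.append(tuple(test))
    return tests
```

For parameters with many values, such a set is typically far smaller than the exhaustive cross product, which is what makes the comparison against hand-generated sets of similar size meaningful.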

View original page

06 March 14:00Securing Virtual Machine Monitors: What is Needed? / Paul Karger (IBM Research - Watson)

FW26, Computer Laboratory, William Gates Building

While many people view virtual machine monitors as something special and different, in reality they are just special-purpose operating systems. The major difference is that the API to a virtual machine monitor is the instruction set of the virtual machine, while the API to an operating system is a set of system calls to manipulate processes, file systems, perform I/O, etc. To the extent that a particular VMM uses paravirtualization, it begins to look more like a classical operating system than a VMM -- and just like operating systems, VMMs can have exploitable security vulnerabilities.

This talk will discuss the myths and reality behind virtualization and security, and look at what is needed to build truly secure VMMs.

View original page

27 February 16:00Mobile malware prevention using temporal information / John Tang, Computer Laboratory

Computer Laboratory, William Gates Building, Room FW11

Abstract not available

View original page

20 February 16:00Security lessons from embedded devices / Philip Paeps, FreeBSD security team

Computer Laboratory, William Gates Building, Room FW11

Embedded devices are everywhere. As more and more of them become networked, devices which traditionally had no security requirements now have to take into account quite sophisticated threat models. In a world where time to market is critical and deployments happen in very large volumes, engineers often have to learn about security in zero time while they work.

This talk will discuss some of the technical security measures that are being taken in the embedded world from the perspective of the people implementing them.

View original page

18 February 14:15Privacy and HCISec: Notes From The Front / Alma Whitten - Google

Lecture Theatre 1, Computer Laboratory

Is engineering privacy an HCISec challenge? Is designing to give users transparency and choice on privacy the same kind of problem as designing to make security usable? Alma will discuss observations drawn from her experiences on the front lines of some of today's most interesting privacy debates.

View original page

12 February 16:00Wedge: Splitting Applications into Reduced-Privilege Compartments / Andrea Bittau (UCL)

SS03, Computer Laboratory, William Gates Building

Most applications today run as single processes, allowing successful attackers to access all of the process's memory and sensitive data. We intend to reverse this situation by splitting applications into multiple compartments that hold no privileges by default, and allowing programmers to explicitly grant privileges and memory permissions, therefore controlling the damage of potential exploits.
Our system Wedge is composed of two synergistic parts: the sthread OS primitives, which allow programmers to create default-deny compartments with explicitly set privileges, and Crowbar, a tool that analyzes existing applications at run time to help identify potential sthreads along with their required memory and file descriptor permissions, allowing a simpler migration of existing code to sthreads. We applied sthreads to SSL-enabled Apache, protecting the privacy of user data even against a powerful attacker who can both exploit a large part of the server and act as a man-in-the-middle in the network, all at a 20-40% performance cost. Finally, we describe a userland implementation of sthreads that does not sacrifice performance, thanks to the careful (ab)use of UNIX APIs.

Bio: Andrea Bittau is a PhD student at UCL working on operating system support for application security, supervised by Mark Handley and Brad Karp. His past projects include the fragmentation attack for 802.11 WEP networks, where an attacker can spoof and eavesdrop data without needing the WEP key, and developing the first open source Bluetooth sniffer, based on GNU radio.

View original pageView slides/notes

03 February 16:15Privacy-Preserving 802.11 Access-Point Discovery / Janne Lindqvist - Helsinki University of Technology, Finland

Lecture Theatre 2, Computer Laboratory, William Gates Building

It is usual for 802.11 WLAN clients to probe actively for access points in order to hasten AP discovery and to find “hidden” APs. These probes reveal the client’s list of preferred networks and thus present a privacy risk: an eavesdropper can infer attributes of the client based on its associations with networks. We propose an access-point discovery protocol that supports fast discovery and hidden networks while also preserving privacy. Our solution is incrementally deployable, efficient, requires only small modifications to current client and AP implementations, interoperates with current networks, and does not change the user experience. Based on measurements of a prototype implementation, our solution is also faster than the standard hidden-network discovery protocol.

View original page

03 February 14:30An overview of the Smart Flow project / David Eyers (University of Cambridge)

Room FW11, Computer Laboratory, William Gates Building

*Slides "available":http://www.cl.cam.ac.uk/research/srg/opera/meetings/priv-attachments/2009_02_03_SmartFlow_Eyers.pdf .*
_(only available for CL members)_

*Abstract:*

This talk will provide an overview of the Smart Flow project. The project has participants from the Computer Laboratory, Imperial College, and the Eastern Cancer Registration and Information Centre.

Smart Flow aims to examine issues of reconfigurability, policy, and security in distributed event based systems, with a particular focus on managing flows of healthcare-related data.

We will discuss some of our initial work into a distributed information flow model that is tuned toward use within event based systems.

View original pageView slides

28 January 14:15A Framework for the Analysis of Mix-Based Steganographic File Systems / Claudia Diaz - Department of Electrical Engineering (ESAT), K.U.Leuven, Belgium

Lecture Theatre 1, Computer Laboratory

The goal of Steganographic File Systems (SFSs) is to protect users from coercion attacks by providing plausible deniability on the existence of hidden files. We consider an adversary who can monitor changes in the file store and use this information to look for hidden files when coercing the user. We outline a high-level SFS architecture that uses a local mix to relocate files in the remote store, and thus prevent known attacks that rely on low-entropy relocations. We define probabilistic metrics for unobservability and (plausible) deniability, present an analytical framework to extract evidence of hidden files from the adversary’s observation (before and after coercion), and show in an experimental setup how this evidence can be used to reduce deniability.
This work is a first step towards understanding and addressing the security requirements of SFSs operating under the considered threat model, of relevance in scenarios such as remote stores managed by semi-trusted parties, or distributed peer-to-peer SFSs.

Full paper:

https://www.cosic.esat.kuleuven.be/publications/article-1051.pdf

View original page

27 January 16:15Improving Cache Performance while Mitigating Software Side-Channel Attacks / Ruby Lee - Princeton University

Lecture Theatre 2, Computer Laboratory, William Gates Building

Improving the security of computers has traditionally been associated with degrading performance. Princeton researchers show a rather surprising result where both security and performance can be improved by rethinking cache architecture. Cache subsystems bridge the speed gap between processors and main memory, and are essential for improving the performance of computer systems. However, the fundamental difference in cache hit versus miss timing can be exploited to leak secret information, such as the cryptographic keys of AES and RSA ciphers. Almost all computers are vulnerable to these software side-channel attacks. Software solutions are algorithm-specific, do not apply to legacy programs and severely degrade performance. A generic hardware solution that applies to all software, does not degrade performance, and prevents all access-based cache side-channel attacks, is desirable. New security-aware cache architectures, presented in ISCA2007, were the Random Permutation cache (RPcache) and the Partition Locked cache (PLcache). A novel cache architecture (Micro2008) is presented that not only improves security, but also improves performance, achieving the best cache access time, miss-rate and power consumption of existing classes of cache architectures. Fault-tolerance, hot-spot mitigation and flexible partitioning are additional benefits.
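
The class of access-based attacks being defended against can be illustrated with a toy prime-and-probe simulation (a sketch of the general technique, not any specific published attack): the attacker fills a shared cache, lets the victim perform one secret-dependent lookup, and learns the secret index from which cache set subsequently misses.

```python
# Toy prime-and-probe on a direct-mapped cache with 16 sets: the attacker
# recovers a victim's secret-dependent lookup index from which of its own
# cache lines was evicted.
NSETS = 16

class Cache:
    def __init__(self):
        self.sets = {}                      # set index -> (owner, address)
    def access(self, addr, owner):
        s = addr % NSETS
        hit = self.sets.get(s) == (owner, addr)
        self.sets[s] = (owner, addr)        # direct-mapped: an access evicts
        return hit

def prime_probe(victim_index):
    cache = Cache()
    for s in range(NSETS):                  # prime: fill every set
        cache.access(s, "attacker")
    cache.access(victim_index, "victim")    # victim's secret-dependent lookup
    # Probe: the set that now misses reveals the victim's index.
    return [s for s in range(NSETS) if not cache.access(s, "attacker")]

assert prime_probe(7) == [7]
```

Defences like the RPcache randomize exactly this address-to-set mapping, so a probed miss no longer identifies the victim's index.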

View original pageView slides/notes

20 January 16:15Hardware security: trends and pitfalls of the past decade / Sergei Skorobogatov - Computer Laboratory, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

It has been a long time since the hardware security problems in semiconductor chips were brought to light by Ross Anderson and Markus Kuhn in the late nineties, followed shortly by the Markus Kuhn and Oliver Kömmerling paper on the forefront attack technologies used for breaking smartcards. Now that a whole decade has passed, it seems logical to look at the progress made in this area. Defence technology has improved significantly, but the attackers did not sit idle and made some progress as well. The question is whether the lessons were learned and whether the semiconductor devices around us today offer significantly better hardware security protection.

The purpose of this talk is not only to summarise achievements at both the attack and defence ends. I will also raise some concerns about certain security failures, point out common mistakes made by chip manufacturers, and discuss possible roots of such problems. Finally, I will try to project the trend of the hardware security area into the near future.

View original page

16 January 16:00How secure is my messaging protocol for clinical communication? / Mohammed Al-Ubaydli

Computer Laboratory, William Gates Building, Room FW11

I am working on a secure messaging protocol for patients and clinicians. At the moment, patients are sending their questions over e-mail to NHS clinicians, and the clinicians are forced either to ignore the questions (because of the insecurity of the medium) or to send clinical information in the clear (in order to serve the patient’s immediate clinical needs).

I am hoping to offer a better service that is more secure but minimizes the impact on clinicians’ workflow, i.e. by allowing them to continue to use their NHS e-mail. I need to know from the group:
# how technically secure is this protocol?
# where are the social engineering vulnerabilities?
# are vulnerabilities low enough to allow adopting this protocol as an improvement over existing workflow?

By way of background, my name is Mohammad (www.mo.md) and I trained as a physician at Cambridge University and a programmer at Anglia Ruskin University. I wrote six books about the use of IT in health care but have no expertise in security so was hoping to benefit from the Friday security group meetings.

View original page

13 January 16:15Identity Theft and the Mobile Device / Andy Jones - Head of Information Security Research, Centre for Information & Security Systems Research, BT Innovate

Lecture Theatre 2, Computer Laboratory, William Gates Building

This presentation will cover the results of research into the quantity and type of information that we give away when we dispose of mobile phones and PDAs and the threat that this poses with regard to identity theft and criminal use of the information.

2008

View original pageView slides/notes

09 December 16:15Bayesian Inference and Traffic Analysis / Carmela Troncoso, Microsoft Research Cambridge/KU Leuven(COSIC)

Lecture Theatre 2, Computer Laboratory, William Gates Building

Traffic analysis attacks on anonymity networks have long been based on heuristics that allow an attacker to uncover communication partners under specific assumptions; however, slight changes in the model render these methods useless. We present a general model for the analysis of mix networks which captures characteristics of anonymity systems subject to constraints while accommodating most previously proposed attacks. Furthermore, we show how this model can be used to obtain the probabilities of who speaks with whom through the use of Bayesian inference techniques and Markov Chain Monte Carlo simulations.
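
For a toy mix round the Bayesian approach can be made concrete without MCMC: with only a handful of senders, the posterior is computable by exact enumeration over matchings (the profile values below are hypothetical; MCMC becomes necessary only when the network is too large to enumerate).

```python
from itertools import permutations

# Toy threshold mix: in one round every sender emits one message and every
# receiver gets one, so the hidden state is a perfect matching.
# Prior: sender s contacts receiver r with probability profile[s][r].
profile = {
    "alice": {"r1": 0.8, "r2": 0.1, "r3": 0.1},
    "bob":   {"r1": 0.1, "r2": 0.8, "r3": 0.1},
    "carol": {"r1": 0.1, "r2": 0.1, "r3": 0.8},
}
senders, receivers = list(profile), ["r1", "r2", "r3"]

def posterior(sender, receiver):
    """P(sender -> receiver | one observed mix round): sum the prior weight
    of every matching consistent with that assignment, then normalize."""
    total = match = 0.0
    for perm in permutations(receivers):
        p = 1.0
        for s, r in zip(senders, perm):
            p *= profile[s][r]
        total += p
        if perm[senders.index(sender)] == receiver:
            match += p
    return match / total

# The matching constraint of the round sharpens the 0.8 prior to ~0.97.
assert abs(posterior("alice", "r1") - 0.520 / 0.538) < 1e-9
```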

View original page

02 December 16:15Talking to strangers / Bruce Christianson, University of Hertfordshire

Lecture Theatre 2, Computer Laboratory, William Gates Building

Access Control is conventionally built on top of authentication. This approach is problematic when several different security policy domains are involved. Authenticating across domain boundaries requires contending with different policies (and mechanisms) for identity management, delegation and revocation of authorization, etc. Additional issues in pervasive computing include the lack of transitive infrastructure and the promiscuity of casual device interactions.

This talk will describe an approach to localizing the trust assumptions required for multi-domain access control in a pervasive environment. We place dual capabilities inside Identity-Based Encryption wrappers to force the authentication problems back inside each player's 'home' domain.

Security problems which arise from talking to the wrong strangers are usually addressed by attempting to ensure that we know to whom we are speaking. We argue that often it is preferable to know that we are talking to the correct stranger.

View original page

28 November 14:45Dynamics, robustness and fragility of trust / Dusko Pavlovic

Room FW11, Computer Laboratory, William Gates Building

I present a model of the process of trust building which suggests that trust is like money: the rich get richer. The proviso is that the cheaters do not, on average, wait too long with their deceit. The model explains the results of some recent empirical studies, pointing to a remarkable phenomenon of *adverse selection*: a greater percentage of unreliable or malicious web merchants are found among those with certain (most popular) types of trust certificates than among those without. While some such findings can be attributed to a lack of diligence, and even to conflicts of interest in trust authorities, the model suggests that public trust networks would remain attractive targets for spoofing even if trust authorities were perfectly diligent. If time permits, I shall discuss some old and some new ways to decrease this vulnerability, and some problems for exploration.
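
The "rich get richer" dynamic is essentially preferential attachment, which a few lines of simulation make concrete (a sketch of the general phenomenon, not Pavlovic's model; all parameters are illustrative):

```python
import random

def simulate(agents=50, rounds=5000, seed=1):
    """Preferential-attachment sketch of trust accumulation: each round a
    partner is chosen with probability proportional to the trust it already
    holds, and a successful interaction adds one unit of trust."""
    random.seed(seed)
    trust = [1] * agents
    for _ in range(rounds):
        i = random.choices(range(agents), weights=trust)[0]
        trust[i] += 1
    return sorted(trust, reverse=True)

t = simulate()
assert sum(t) == 50 + 5000    # trust units are conserved as they accumulate
assert t[0] > t[-1]           # early winners pull far ahead of the rest
```

The skewed outcome is what makes the most-trusted identities such attractive spoofing targets.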

View original page

27 November 14:00Economics of architectural change: resistance to distributed denial of service attacks / Mikko Särelä, NomadicLab, Ericsson

Lecture Theatre 1, Computer Laboratory, William Gates Building

The past years have seen many proposals for a distributed-denial-of-service-resistant architecture for the Internet. Still, such technologies have mostly not been deployed, and there are no functioning markets for resistance against such attacks. In this presentation, we study the Internet as a business network and draw lessons for technology design. The preliminary findings indicate that deployment incentives arise at the edges, that there is an asymmetry between incentives on the uphill and downhill paths, and finally that the technologies must provide a reliable and enforceable way of filtering bad traffic.

View original page

21 November 16:00The robustness of CAPTCHAs / Jeff Yan, Newcastle University

Computer Laboratory, William Gates Building, Room FW11

No matter whether you like it or hate it, CAPTCHA has found widespread application on numerous commercial web sites - it is now almost a standard security mechanism for defending against undesirable or malicious Internet bot programs.

This talk introduces our recent work on attacking numerous widely deployed CAPTCHAs. I will present new techniques, of general value, for attacking a number of text CAPTCHAs, including the schemes designed and deployed by Microsoft, Yahoo and Google. In particular, the Microsoft CAPTCHA has been deployed since 2002 at many of their online services, including Hotmail, MSN and Windows Live. Designed to be segmentation-resistant, this scheme has been studied and tuned by its designers over the years. However, our simple attack has achieved a segmentation success rate higher than 90% against this scheme, taking on average ~80 ms to completely segment a challenge on an ordinary desktop computer. As a result, we estimate that this CAPTCHA could be instantly broken by a malicious bot with an overall (segmentation and then recognition) success rate of more than 60%. By contrast, the design goal was that automated attacks should not achieve a success rate higher than 0.01%. For the first time, our work shows that CAPTCHAs carefully designed to be segmentation-resistant are vulnerable to novel but simple attacks.

Our experience suggests that CAPTCHA will go through the same process of evolutionary development as cryptography, digital watermarking and the like, with an iterative process in which successful attacks lead to the development of more robust systems.
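
To give a flavour of such segmentation attacks: if a scheme's glyphs binarize into disjoint blobs of foreground pixels, a simple connected-component ("colour filling") pass segments the challenge. This is a minimal sketch assuming an already-binarized bitmap, not the full published attack.

```python
# Connected-component segmentation of a toy binarized CAPTCHA:
# each 4-connected blob of foreground pixels becomes one segment.
def segment(bitmap):
    rows, cols = len(bitmap), len(bitmap[0])
    seen, segments = set(), []
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] and (r, c) not in seen:
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:            # flood-fill ("colour") one blob
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and bitmap[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                segments.append(blob)
    return segments

bitmap = [[int(ch) for ch in row] for row in ("110011", "110011", "110011")]
assert len(segment(bitmap)) == 2    # two disjoint strokes, two segments
```

Real schemes add arcs and overlaps precisely to break this assumption; the published attacks combine such passes with further heuristics.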

View original pageView slides

11 November 16:15Is the Operating System the Right Place to Address Mobile Phone Security? / Craig Heath, Symbian

Lecture Theatre 2, Computer Laboratory, William Gates Building

* What we mean by a "secure" mobile phone
* What broad approaches are possible (or "who will trust whom to do what?")
* What measures can be taken by the operating system
* How effective those measures have been in practice
* Whether the "costs" of the security measures are fairly distributed
* How economic incentives can be adjusted for better advantage
* How operating system security can cooperate with other measures
* Followed by open discussion

View original page

04 November 16:15Improving Tor using a TCP-over-DTLS Tunnel / Joel Reardon, University of Waterloo

Lecture Theatre 2, Computer Laboratory, William Gates Building

The Tor network gives anonymity to Internet users by relaying their traffic through a variety of routers around the world. This incurs undesirable latency, and we explore where this latency occurs. Experiments discount transport latency and computational latency, leaving a substantial remaining component; we determine that congestion control is causing this delay.

Tor multiplexes multiple streams of data over a single TCP connection. This is not the proper use of TCP, and as such results in the improper application of congestion control. We illustrate an example of this occurrence on a Tor node in the wild, and also illustrate how packet dropping and reordering cause interference between the multiplexed streams.

Our solution is to use a TCP-over-DTLS transport between routers, giving each stream of data its own TCP connection. We present our design and show experimental evidence that our proposal resolves the multiplexing issues discovered in our performance analysis. We conclude with a number of steps towards optimizing and improving our work.
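
The cross-stream interference can be sketched in a few lines (an illustration of TCP head-of-line blocking, not Tor's actual cell protocol): in-order delivery means one lost segment stalls every multiplexed stream queued behind it.

```python
# Toy illustration of TCP head-of-line blocking across multiplexed streams.
def delivered_in_order(packets, lost):
    """TCP hands data to the application only in sequence, so one lost
    segment stalls everything behind it until retransmission."""
    out = []
    for seq, stream in packets:
        if seq >= lost:
            break                       # everything after the loss stalls
        out.append(stream)
    return out

# Streams A and B share one TCP connection; A's segment (seq 2) is lost...
mux = [(0, "A"), (1, "B"), (2, "A"), (3, "B"), (4, "B")]
assert delivered_in_order(mux, lost=2) == ["A", "B"]   # ...stalling B too

# With a TCP connection per stream, a loss in A leaves B untouched.
b_only = [(0, "B"), (1, "B"), (2, "B")]
assert delivered_in_order(b_only, lost=99) == ["B", "B", "B"]
```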

View original page

29 October 14:15How to Protect your Data by Eliminating Trusted Storage Infrastructure / David Mazieres - Stanford University

Lecture Theatre 1, Computer Laboratory

Storage systems typically trust some amount of infrastructure to behave correctly--the network, a file server, a certificate authority.
Many interpret "protecting data" to mean building a security fence around this trusted infrastructure. Unfortunately, people frequently fail to build high enough fences. Moreover, even low fences inconvenience honest people by limiting the ways in which they can access, update, and manage data.

An alternative is to design systems that cope with compromised infrastructure. This talk will present a set of techniques that progressively chip away at the security requirements of ordinary network file systems--eliminating the need to trust the network, eliminating the need to rely on certificate authorities, eliminating the need to trust replicas of popular data, mitigating the effects of compromised clients and passwords.

Finally, I'll show how clients can detect attempts to tamper with data even after an attacker completely compromises the file server. All of these techniques have been realized in usable systems, demonstrating that practical, strong data security need not come at the cost of high fences and their associated management constraints.

View original page

21 October 16:15Browsing with the enemy: a German view / Kai Buchholz-Stepputtis and Boris Hemkemeier

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

View original pageView slides

05 September 16:00Electronic health records: which is worse, the UK system or the US System? / Deborah C. Peel, Patient Privacy Rights

Lecture Theatre 2, Computer Laboratory, William Gates Building

Dr. Deborah Peel will discuss the current threats to privacy posed by the use of electronic health records in America. She is convinced that the US health IT system is far worse than that of the UK. And America has nothing comparable to the EU system of data privacy commissioners to protect the public's human rights. She argues that the current Administration and Congress have enabled and frankly encouraged US industry and government to engage in widespread surveillance, theft, sale, and misuse of Americans' sensitive personal health data. In 2002, the U.S. Department of Health and Human Services eliminated the right of consent in the HIPAA Privacy Rule, turning it into an 'Anti-Privacy Rule'. The result was to eliminate Americans' rights to control the use and disclosure of personal health information in electronic systems. Secondary uses without consent are now the primary uses of health data in the US.

Today, Americans have no way of knowing how many secret databases across the world store and use their health records. Both industry and the government lust after total access to the nation's treasure troves of health data. Numerous industries exploit the extreme commercial value of richly detailed health data. For example, one data miner, listed on the NYSE, reported revenues of $2 billion dollars in 2006. The seriously flawed US health IT system has spurred technology innovators to restore privacy rights by building trustworthy systems and products controlled by patients. The route to progress and the widespread adoption of health IT is through privacy. But consumers can't tell which systems and products to trust.

A new consumer-led privacy certification organization, Patient Privacy Certified, will audit health IT systems and products for adherence to the toughest privacy standards in the world. Certified products will be awarded a seal so consumers can tell that they offer ironclad security and privacy for health records.

Speaker:

Deborah C. Peel, MD, founded Patient Privacy Rights in 2004 "www.patientprivacyrights.org":http://www.patientprivacyrights.org to guarantee that Americans control all access to their personal health information. Patient Privacy Rights is America's leading consumer advocacy organization working to restore patients' rights to health information privacy.

In 2006, Dr. Peel formed the bipartisan Coalition for Patient Privacy. Coalition members include the Family Research Council, the Christian Coalition, the Electronic Privacy Information Center, the ACLU, the California Medical Association, and the American Chiropractic Association - over 50 organizations representing 7 million people.

In 2007, the world's largest technology corporation, Microsoft, joined the Coalition and agreed to adhere to the Coalition's privacy principles. Also in 2007, Dr. Peel was voted #4 of Modern Healthcare's 100 Most Powerful in Healthcare.

In 2008, PPR launched PrivacyRightsCertified, a consumer-led organization to certify electronic systems and software that meet the toughest national and international standards for privacy. This enables the public to tell which electronic health systems and products ensure that personal health information is secure and all access is controlled by the patient. Microsoft's HealthVault will be the first platform audited.

View original page

24 June 16:15 Advances in Hash Cryptanalysis / Christian Rechberger, IAIK, Graz University of Technology

Lecture Theatre 2, Computer Laboratory, William Gates Building

Hash functions are the Swiss army knife of cryptographers. Password protection and digital signatures (also in a potential post-quantum period) are applications where they surface outside the cryptographic community. Not only are almost all popular hash functions based on the same design principle, it also turned out that their designers were not conservative enough. Spectacular practical attacks (e.g. on MD5) were the result in recent years, and standardization organisations are looking for replacements.

The ubiquitously used SHA-1 exhibits higher resistance against shortcut collision search attacks. Still, to motivate the shift _away from SHA-1_, we found a new shortcut attack which is estimated to be around a million times faster than generic attacks. The work factor is still very high, and hence we started a distributed computing project to find the first SHA-1 collision:
"SHA-1 Collision Search Graz":http://boinc.iaik.tugraz.at

Many applications of hash functions do not require collision resistance but rely on properties that are generally assumed to be much harder to violate (like resistance against inversion attacks). Nevertheless, some of our very recent results indicate that also here, we might see a development similar to collision attacks.
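
The "generic attacks" used as the yardstick above are birthday searches, costing on the order of 2^(n/2) hash computations for an n-bit digest (~2^80 for SHA-1). A sketch on a truncated digest makes the scaling concrete:

```python
import hashlib

def birthday_collision(nbits=24):
    """Generic birthday search on SHA-1 truncated to nbits: hash counter
    values, storing digests until one repeats. Expected work is on the
    order of 2**(nbits/2) hashes -- a few thousand here, but ~2**80 for
    the full 160-bit SHA-1, which is why shortcut attacks matter."""
    seen = {}
    i = 0
    while True:
        d = hashlib.sha1(str(i).encode()).digest()[: nbits // 8]
        if d in seen:
            return seen[d], i
        seen[d] = i
        i += 1

a, b = birthday_collision()
assert a != b
assert (hashlib.sha1(str(a).encode()).digest()[:3]
        == hashlib.sha1(str(b).encode()).digest()[:3])
```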

View original page

17 June 16:15"Fourteen Thousand Messages" / John Levine

Lecture Theatre 2, Computer Laboratory, William Gates Building

A guy I know went away on a trip for a month and a half. When he got back, his inbox had 14,000 messages waiting for him, real ones, since his mail system has pretty good spam filtering. How can anyone deal with that much mail? More importantly, there are tools to sort, filter, combine, and so forth to get the mail under control, but how can people who aren't techno-weenies like me manage and use the tools we have? Or do we need different tools?

View original page

20 May 14:30Privacy-preserving datagram delivery for ubiquitous systems / David Evans (Computer Laboratory)

Room FW11, Computer Laboratory, William Gates Building

*Slides "available":http://www.cl.cam.ac.uk/research/srg/opera/meetings/attachments/privacy-preserving_evans_2008_05_20.pdf .*

*Abstract:*

This talk describes one method of achieving communication privacy for ubiquitous systems and presents some preliminary performance results. More specifically, we (i) describe the difference between data privacy and communication privacy and outline why both are important in ubiquitous computing; (ii) describe how to modify Tor, an anonymous communication framework, to provide anonymous datagram communication suitable for use in ubiquitous systems; and (iii) test and evaluate the performance of our proposal with reference to an example citywide sensor network. We find that the system offers ubiquitous applications a low-latency communication channel with reasonable privacy properties and that one pays surprisingly little for the benefits of the Tor infrastructure.

*Bio:*

David Evans is a Research Associate attached to the TIME-EACM project, examining issues of security and privacy in transport monitoring middleware and applications.

He holds a PhD from the University of Waterloo in Waterloo, Ontario, Canada, where he explored resource management strategies for the delivery of rapidly changing, frequently requested information.

He has also worked on software infrastructures for unobtrusive monitoring of frail individuals, and with the IBM Centre for Advanced Studies on web system scalability and data centre resource provisioning.

His masters research covered digital rights management.

His research interests include performance modelling and analysis of distributed and operating systems, privacy and trust, and novel applications for low-overhead virtualisation.

You can see more of "Dave's research":http://www.cl.cam.ac.uk/~de239/ .

View original page

30 April 14:15Copyright vs Community / Richard Stallman, www.gnu.org

Lecture Theatre 1, Computer Laboratory

Copyright developed in the age of the printing press, and was designed to fit with the system of centralized copying imposed by the printing press. But the copyright system does not fit well with computer networks, and only draconian punishments can enforce it.

The global corporations that profit from copyright are lobbying for draconian punishments, and to increase their copyright powers, while suppressing public access to technology. But if we seriously hope to serve the only legitimate purpose of copyright--to promote progress, for the benefit of the public--then we must make changes in the other direction.

Brief bio:

Richard Stallman launched the development of the GNU operating system (see www.gnu.org) in 1984. GNU is free software: everyone has the freedom to copy it and redistribute it, as well as to make changes either large or small. The GNU/Linux system, basically the GNU operating system with Linux added, is used on tens of millions of computers today. Stallman has received the ACM Grace Hopper Award, a MacArthur Foundation fellowship, the Electronic Frontier Foundation's Pioneer award, and the Takeda Award for Social/Economic Betterment, as well as several honorary doctorates.

View original page

29 April 16:15"From the Casebooks of..." / Mark Seiden

Lecture Theatre 2, Computer Laboratory, William Gates Building

In a field with few design principles ("defense in depth"? separate duties?), few rules of thumb, no laws named after people more influential than Murphy, no Plancks or Avogadros to hold Constant, and little quantitation of any sort (we count bad things and how long it takes to fix them), it appears the best we can do right now is telling stories.

Over (enough) beer we cons up lightly anonymized War Stories about late night phone calls, scary devices, hard to find bugs (which exploiters somehow found), the backups that didn't, stupid criminals, craven prosecutors, cute hacks (but "don't try this at home") and pointy-haired bosses... There will be a few of these in this talk, but also some Cautionary Tales, parables, isomorphs of the Old Stories demonstrating human frailty and that the Law of Unexpected Consequences operates most strongly near the intersection of Bleeding Edge and Slippery Slope. Also just a bit about the future.

Mark Seiden, a programmer since the '60s, has worked since 1983 in areas of security, network, and software engineering for companies world-wide. As a Yahoo Paranoid and as a consultant, recent projects have included design, architecture, and implementation for ebusiness systems, security for online financial transaction processing and for a distributed document processing system, as an expert in computer crime cases, and testing of network, procedural and physical security in diverse deployed systems, enterprises, and colocation facilities.

Time Digital named him one of the 50 "CyberElite" in their first annual list, and he's been involved with four National Academy of Sciences studies on some trippy subjects. Mark was the first registrant of the domain food.com. He's been played by an actor in a rather bad movie. His Erdos number is 4.

View original page

23 April 16:15Fighting online crime / Mikko Hyppönen, Chief Research Officer, F-Secure Corporation

Lecture Theatre 2, Computer Laboratory, William Gates Building

*Fighting Online Crime*

This talk will cover how commercial antivirus labs operate today: what kinds of systems are in use, how samples are collected, and how they are analysed.

There will also be discussion of the changing enemy and of current criminal trends and their origins.

Topics:
* Daily operation of a modern antivirus lab
* Changing enemy
* Espionage trojans
* Mobile malware

Mikko Hypponen has worked with computer viruses since 1991. He is an inventor of several patents and has written for magazines such as Scientific American, Foreign Policy and Virus Bulletin. Mr. Hypponen works for F-Secure Corporation in Helsinki, Finland.

View original pageView slides

15 April 16:15Process isolation for cloud computing using commodity operating systems / Wenbo Mao, Director and Chief Engineer, EMC Research China

Lecture Theatre 2, Computer Laboratory, William Gates Building

In new ways of computing, such as Grid and Cloud computing, the computing environment is a multi-tenancy, virtual-organization setting for which conformant guest process isolation is an important quality of service. Some known approaches suggest making use of the natural isolation that exists between virtual machines (VMs) by deploying processes of different guests into separate VMs. We argue that, under a reasonable assumption of using commodity OSes, process isolation via inter-VM isolation is not only inadequate in security, but also impractical in performance and several other respects. In Project Daoli, we work on process isolation within a VM. Our method modifies the open-source hypervisor Xen by adding process-isolation components with conformant behavior.

Daoli is a project on trusted grid infrastructure led by EMC Research China, working with Fudan University, Wuhan University, and Huazhong University of Science and Technology in China.

View original pageView slides/notes

08 April 16:15An Empirical Analysis of Phishing Attack and Defense / Tyler Moore (Computer Laboratory, University of Cambridge)

Lecture Theatre 2, Computer Laboratory, William Gates Building

A key way in which banks mitigate the effects of phishing attacks is to remove the fraudulent websites and the abusive domain names hosting them. We have gathered and analyzed empirical data on phishing website removal times and the number of visitors that the websites attract. We find that website removal is part of the answer to phishing, but it is not fast enough to completely mitigate the problem. Phishing-website lifetimes follow a long-tailed lognormal distribution -- while many sites are removed quickly, others remain much longer. We have found evidence that one group responsible for half of all phishing, the rock-phish gang, cooperates by pooling hosting resources and by targeting many banks simultaneously. The gang's architectural innovations have significantly extended their websites' average lifetime. Using response data obtained from the servers hosting phishing websites, we also provide a ballpark estimate of the total losses due to phishing.
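
The practical consequence of a long-tailed lifetime distribution is easy to see in simulation (parameters here are hypothetical, not the paper's fitted values): the mean lifetime, which governs total victim exposure, far exceeds the typical (median) lifetime.

```python
import random, statistics

random.seed(0)
# Lognormal lifetimes with hypothetical parameters: median around
# e**3 ~ 20 (arbitrary time units) but a heavy right tail.
lifetimes = [random.lognormvariate(3.0, 1.5) for _ in range(10_000)]
median = statistics.median(lifetimes)
mean = statistics.mean(lifetimes)
# A minority of long-lived sites inflates the mean far above the median,
# so take-down speed for the tail matters more than the typical case.
assert mean > 2 * median
```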

Phishing-website removal is often subcontracted to specialist companies. We analyze three months of `feeds' of phishing website URLs from multiple sources, including two such companies. We demonstrate that in each case huge numbers of websites may be known to others, but the company with the take-down contract remains unaware, or learns of sites only belatedly. Upon calculating the resultant increase in lifetimes caused by the take-down company's lack of action, the results categorically demonstrate that significant amounts of money are being put at risk by the failure to share proprietary feeds of URLs.

Finally, we have studied how one anti-phishing organization has leveraged the so-called `wisdom of crowds' by relying on volunteers to submit and verify suspected phishing sites. We show its voting-based decision mechanism to be slower and less comprehensive than unilateral verification performed by companies. We also find that the distribution of user participation is highly skewed, leaving the scheme vulnerable to manipulation.

View original pageView slides/notes

25 March 16:15Minimal TCB Code Execution / Jonathan M. McCune, Carnegie Mellon University

Lecture Theatre 2, Computer Laboratory, William Gates Building

We present Flicker, an architecture that allows code to execute in complete isolation from other software while trusting only a tiny software base that is orders of magnitude smaller than even minimalist virtual machine monitors. Flicker can also provide fine-grained attestation of the code executed (as well as its inputs and outputs) to a remote party. Our technique enables more meaningful attestation than previous proposals, since only measurements of the security-sensitive portions of an application need to be included. We achieve these guarantees by leveraging hardware support provided by commodity processors from AMD and Intel that are shipping today, and without requiring a new operating system.

View original page

29 February 16:00MD5crypt and GBDE: observations of a non-union cryptographer / Poul-Henning Kamp

Computer Laboratory, William Gates Building, Room FW11

Cryptographers are great guys and smart people, but why don't they ever produce code that solves the problems we have, and why do they whine when we do?

MD5crypt, probably the world's most widely used password protection scheme, was thrown together by a non-cryptographer in an afternoon. Why did he have to? (And why isn't he too proud of it?)

GBDE, an encrypted disk facility, took considerably more work in the second step of the Feynman algorithm, and a solid beating from card-carrying members of the cryptographers' union. But did anybody learn anything, and if so, what?

View original page

22 February 16:00Is SSL provably secure ? / Nigel Smart, Department of Computer Science, University of Bristol

Lecture Theatre 2, Computer Laboratory, William Gates Building

In this talk I will describe some joint work with P. Morrissey and B. Warinschi on the SSL protocol. We attempt to show that an abstraction of the SSL protocol does provide a secure key agreement protocol, and we quantify exactly what properties are required of any subprotocol which produces the pre-master secret.

View original page

13 February 16:15Hardware defences against side channel and invasive attacks / Philip Paul, Computer Lab, University of Cambridge

SS03, Computer Laboratory, William Gates Building

Low cost hardware security devices are increasingly deployed but are vulnerable to a number of attacks. We will demonstrate power analysis counter measures to make non-invasive attacks much more difficult, and ink jet coating techniques to defend against invasive attacks.

View original pageView slides/notes

12 February 16:15Hot or Not: Fingerprinting hosts through clock skew / Steven J. Murdoch (Computer Laboratory, University of Cambridge)

Lecture Theatre 2, Computer Laboratory, William Gates Building

Every computer has a unique clock skew, even machines of the same model, so clock skew acts as a fingerprint. Even if a computer moves location and changes ISP, it can later be identified through this phenomenon.

By collecting TCP timestamps or sequence numbers, clock skew can be accurately measured remotely. In addition to varying between computers, clock skew also changes with temperature. Thus a remote attacker monitoring timestamps can estimate a computer's environment, which has wide-scale implications for security and privacy.
By measuring day length and time zone, the location of a computer can be estimated, which is a particular concern for anonymity networks and VPNs. Local temperature changes caused by air-conditioning or the movement of people can reveal whether two machines are in the same location, or even whether they are virtual machines on one server.
The temperature of a computer can also be influenced by CPU load, opening up a low-bandwidth covert channel. This could be used by processes which are prohibited from communicating for confidentiality reasons, and because it is a physical covert channel, it can even cross "air-gap" security boundaries.
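The core measurement can be sketched as a straight-line fit: the remote clock's offset from local time drifts linearly, and the slope of that drift (in parts per million) is the skew. The following is an illustrative reconstruction only, not the speaker's actual code, which uses more robust fitting; all names and parameters here are hypothetical.

```python
# Illustrative sketch: estimating a remote host's clock skew from a series
# of timestamp observations. Each sample pairs our local receive time with
# the remote timestamp value (both in seconds).

def estimate_skew_ppm(samples):
    """samples: list of (local_time_s, remote_timestamp_s) pairs.
    Returns the estimated skew in parts per million (ppm): the least-squares
    slope of the (remote - local) offset as a function of local time."""
    n = len(samples)
    t0 = samples[0][0]
    xs = [t - t0 for t, _ in samples]      # elapsed local time
    ys = [r - t for t, r in samples]       # offset: remote minus local
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope * 1e6

# A clock running 50 ppm fast drifts 50 microseconds per elapsed second;
# simulate one hour of observations, one every 60 seconds:
obs = [(t, t * 1.000050 + 0.2) for t in range(0, 3600, 60)]
print(round(estimate_skew_ppm(obs)))   # ~50
```

The temperature effects described above appear as small changes in this slope over time, so in practice the fit is computed over a sliding window rather than all samples at once.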

The talk will demonstrate how to use this channel to attack the hidden service feature offered by the Tor anonymity system.
Here, an attacker can repeatedly access a hidden service, increasing CPU load and inducing a temperature change. This will affect clock skew, which the attacker can monitor on all candidate Tor servers. When there is a match between the load pattern and the clock skew, the attacker has linked the real IP address of a hidden server to its pseudonym, violating the anonymity properties Tor is designed to provide.

The talk will also present a separate illustration of the temperature covert-channel technique: investigating a suspected attack on the Tor network in August 2006 by a well-equipped adversary.

View original page

06 February 14:15Defence against the Dark Arts / Mike Prettejohn, Netcraft

Lecture Theatre 1, Computer Laboratory

http://news.netcraft.com/

Phishing, hacking and Internet based fraud are growing very quickly,
not only in absolute numbers, but also in diversity and complexity.
In this talk, we review contemporary Internet bank robbing,
illustrating the scale of the activity, the technological arms race
between attackers and defenders, and review some recent innovations and
illuminating mistakes from each side.

View original page

29 January 10:30Exploiting Online Games / Gary McGraw, CTO, Cigital

Room FW11, Computer Laboratory, William Gates Building

The talk, based on a book of the same title (co-authored by Greg Hoglund), exposes the inner workings of online game security for all to see, drawing illustrations from MMORPGs such as World of Warcraft to discuss:

* Why online games are a harbinger of software security issues to come
* How millions of gamers have created billion dollar virtual economies
* How game companies invade your privacy
* Why some gamers cheat
* Techniques for breaking online game security
* How to build a bot to play a game for you
* Methods for total conversion and advanced mods

But ultimately this talk is about security problems associated with advanced massively distributed software. With hundreds of thousands of interacting users, today's online games are a bellwether of modern software yet to come. The kinds of attack and defense techniques I describe are tomorrow's security techniques on display today.

BIO
Gary McGraw, Ph.D.
CTO, Cigital

Company: http://www.cigital.com
Podcast: http://www.cigital.com/silverbullet
Blog: http://www.cigital.com/justiceleague
Book: http://www.swsec.com
Personal: http://www.cigital.com/~gem

Gary McGraw is the CTO of Cigital, Inc., a software security and quality consulting firm with headquarters in the Washington, D.C. area. He is a globally recognized authority on software security and the author of six best selling books on this topic. The latest, Exploiting Online Games was released in 2007. His other titles include Java Security, Building Secure Software, Exploiting Software, and Software Security; and he is editor of the Addison-Wesley Software Security series. Dr. McGraw has also written over 90 peer-reviewed scientific publications, authors a monthly security column for darkreading.com, and is frequently quoted in the press. Besides serving as a strategic counselor for top business and IT executives, Gary is on the Advisory Boards of Fortify Software and Raven White. His dual PhD is in Cognitive Science and Computer Science from Indiana University where he serves on the Dean's Advisory Council for the School of Informatics. Gary is an IEEE Computer Society Board of Governors member and produces the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine.

View original page

16 January 14:15Searching for Evil / Ross Anderson and Richard Clayton, Computer Laboratory, University of Cambridge

Lecture Theatre 1, Computer Laboratory

(work with Tyler Moore and Shishir Nagaraja)

Computer security has recently imported a lot of ideas from economics,
psychology and sociology, leading to fresh insights and new tools. We
will describe one thread of research that draws together techniques
from fields as diverse as signals intelligence and sociology to search
for artificial communities.

Evildoers online divide roughly into two categories - those who don't
want their websites to be found, such as phishermen, and those who do.
The latter category runs from fake escrow sites through dodgy stores
to postmodern Ponzi schemes. A few of them buy ads, but many set up
fake communities in the hope of having victims driven to their sites
for free. How can these reputation thieves be detected?

Some of our work in security economics and social networking may give
an insight into the practical effects of network topology. These tie
up in various ways with traffic analysis, long used by the signals
intelligence agencies which trawl the airwaves and networks looking
for interesting targets. We'll describe a number of dubious business
enterprises we've unearthed. Recent advances in algorithms, such as
Newman's modularity matrix, have increased the robustness of covert
community detection. But much scope remains for wrongdoers to hide
themselves better as they become topologically aware; we can expect
attack and defence to go through several rounds of coevolution. We'll
therefore end up by talking about some strategic issues, such as the
extent to which search engines and other service providers could, or
should, share information in the interests of wickedness detection.

(This talk was given as a Google Tech Talk in August 2007 and is available at http://video.google.com/videoplay?docid=-1380463341028815296)

2007

View original pageView slides/notes

18 December 16:15Towards interactive belief, knowledge, and provability: possible application to zero-knowledge proofs / Simon Kramer, École Polytechnique Paris

Lecture Theatre 2, Computer Laboratory, William Gates Building

We argue that modal operators of interactive belief, knowledge, and provability are definable as natural generalisations of their non-interactive counterparts, and that zero-knowledge proofs (from cryptography) have a natural (modal) formulation in terms of interactive individual knowledge, non-interactive propositional knowledge and interactive provability. Our work is motivated by van Benthem's investigation into rational agency and dialogue and our attempt to redefine modern cryptography in terms of modal logic.

This ongoing work builds on Chapter 5 of my thesis Logical Concepts in Cryptography http://library.epfl.ch/en/theses/?nr=3845

View original pageView slides/notes

11 December 16:15Tracking the Russian Business Network (RBN) / Jart Armin

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk concerns the methodology, techniques and experiences of tracking and ultimately exposing the workings of the Russian Business Network (RBN).

According to many within the Internet security arena, RBN is a Russian Internet service provider based in St. Petersburg, notorious for hosting illegal and dubious businesses including child pornography, phishing, spam bot operation, and malware distribution sites. Despite more recent public awareness, the RBN is still an excellent example of a covert criminal community with no known leadership or physical locations.

The author, in collaboration with several other researchers, approached the task from an alternative perspective: to study the RBN from an organizational and business standpoint, and to investigate and uncover its nodes of operation and its necessary interactions with its victims and clients. The question now remains: knowing and exposing RBN is a useful objective, but surely the main goal is to stop them?

View original page

07 December 16:00Graphical passwords: some recent results / Jeff Yan, University of Newcastle

Lecture Theatre 2, Computer Laboratory, William Gates Building

Cognitive and psychological studies have revealed that humans perform far better at remembering pictures than words. This has inspired fast growing research into the design of graphical password systems in both the security and the HCI communities, in the expectation of delivering a graphical alternative to the ubiquitous textual password scheme (which has long suffered from usability problems). However, much work must be done to realise the benefits of the picture superiority effect, and make graphical passwords a usable and robust security solution. In this talk, I will present our recent work on designing graphical passwords that are both secure and usable for hand-held devices.

View original pageView slides/notes

04 December 16:15The Anti-Bank: the privatized delivery of social grants using biometric encrypted smart-cards in southern Africa / Keith Breckenridge, Professor of History and Internet Studies, University of KwaZulu-Natal, South Africa

Lecture Theatre 2, Computer Laboratory, William Gates Building

The South African company Net1 / Aplitec has filled the space left by the collapse of the HANIS smart card. Currently some 13 million people receive monthly grants using biometrically authenticated smart-cards. Aplitec has also built up a system of point-of-sale, microlending and insurance products that deliberately challenge the EMV system. The Aplitec encryption system uses a biometric key to encrypt card data; it is strictly proprietary, and deliberately incompatible with the banking infrastructure. (Serge Belamant, the spirit behind Aplitec, designed the SASWITCH interbank switch in the early 1980s.) The company is listed on the NASDAQ, with a current market capitalization of about R10 billion. All of its revenues are taken from the welfare system.

View original pageView slides/notes

27 November 16:15Networked information processing and privacy in Japan / Andrew A. Adams, School of Systems Engineering, University of Reading

Lecture Theatre 2, Computer Laboratory, William Gates Building

Dr Andrew A. Adams has just spent nine months visiting Meiji University in Tokyo, funded by a Global Research Award from the Royal Academy of Engineering. He has been studying the legal and social approach to privacy of electronic data in Japan and will present some of the results of his study.

There is a myth among researchers that there is no such thing as "privacy" in Japan. Dr Adams refutes this, showing that the advent of networked information processing of personal data has brought Japanese attitudes to information privacy into close alignment with Western attitudes.

Grounded in the social and psychological literature about Japan, this work explains the emergence of Japanese legal protection for personal data in
recent years.

View original pageView slides/notes

13 November 16:15Authentication protocols based on human interaction in security pervasive computing / Nguyen Hoang Long, Oxford University Computing Laboratory

Lecture Theatre 2, Computer Laboratory, William Gates Building

A big challenge in pervasive computing is to establish secure communication over a Dolev-Yao network without a PKI. One approach studied by researchers is to build security through human work: creating a low-bandwidth empirical channel (physical contact, human conversation) over which the transmitted information is authentic and cannot be faked or modified. In this talk, we give a brief survey of authentication protocols of this type, concentrating on our contribution, a group protocol.

We start with non-interactive schemes, for example the one proposed by Gehrmann, Mitchell and Nyberg, point out that it does not optimise the human work, and present our improved version of the scheme. We then move on to analyse strategies used to build interactive pairwise and group protocols that minimise the human work relative to the amount of security obtained. Many of the protocols are based on human comparison of a single short string.

Speaker's website:
http://web.comlab.ox.ac.uk/oucl/work/long.nguyen/

View original page

30 October 16:15Key amplification in unstructured networks / Shishir Nagaraja, Computer Laboratory, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

There are a number of scenarios where users wishing to communicate share a weak secret. Often, they are also part of a common social network. Connections (edges) from the social network are represented as shared link keys between participants (vertices). We propose several mechanisms that utilise the graph topology of such a network to increase the entropy of weak pre-shared secrets. Our proposals are based on using random walks to efficiently identify a chain of common acquaintances between Alice and Bob, each of whom contributes entropy to the final key. Our mechanisms exploit the one-wayness and convergence properties of Markovian random walks to, first, maximise the set of potential entropy contributors and, second, resist contributions from dubious sources by exploiting the community information characteristically present in real-world network topologies.
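The chain-of-acquaintances idea can be sketched in a few lines: a random walk over the social graph traverses a sequence of edges, and the link key of each traversed edge is hashed into the final key. This is a hypothetical toy illustration of the concept only, not the proposed protocol; the graph, key material and parameters are all invented for the example.

```python
# Toy sketch (not the actual mechanism): each edge of a social graph carries
# a shared link key; a random walk hashes every traversed edge's key into
# the amplified key, so each acquaintance on the chain contributes entropy.
import hashlib
import random

def random_walk_key(graph, link_keys, start, steps, seed=None):
    """graph: {node: [neighbours]}; link_keys: {(a, b): bytes}, with both
    orderings of each edge present. Returns a hex digest of the walk."""
    rng = random.Random(seed)
    h = hashlib.sha256()
    node = start
    for _ in range(steps):
        nxt = rng.choice(graph[node])      # one step of the random walk
        h.update(link_keys[(node, nxt)])   # this edge's key adds entropy
        node = nxt
    return h.hexdigest()

# A toy triangle network with symmetric pairwise link keys:
g = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
keys = {}
for a in g:
    for b in g[a]:
        keys[(a, b)] = keys.get((b, a), (a + b).encode())

print(random_walk_key(g, keys, "A", steps=4, seed=1))
```

In the real setting the walk would be run jointly by the participants rather than from a global view of the graph, and the community structure of the topology is what limits how much a dubious node can contribute.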

View original page

16 October 16:15Synergy of crime science and security engineering / Shaun Whitehead

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk discusses the links between crime science and security engineering, drawing on experience of research into designing out mobile phone theft and current investigations into the theft and misuse of electronic services.

View original page

02 October 16:15High security locks: illusion or reality / Marc Weber Tobias

Lecture Theatre 2, Computer Laboratory, William Gates Building

A case study in compromising the most popular high-security lock in America: the Medeco M3.

In the United States, two standards organizations rate cylinders for their ability to withstand forced and covert attack, and certify these locks as suitable for high-security installations. Yet are the standards actually what they represent, and are consumers really secure if they rely upon them, especially where high-value commercial or government targets are involved?


Many high security lock manufacturers claim that their cylinders will be impervious to covert methods of entry including picking and bumping and that they offer high levels of key control, effectively preventing the illegal or unauthorized duplication of their keys.

In this presentation, Marc Weber Tobias offers a detailed analysis of how the Medeco lock, from one of the most respected manufacturers in the United States and Europe, was compromised by his research team. These cylinders are used to protect the most secure areas of commerce and government, not only in America but also in many other countries. This is a serious case in which a basic failure of imagination on the part of design engineers to properly assess the security of the locks they produce has resulted in the exposure of facilities to serious potential vulnerabilities.

Bio: Marc Weber Tobias is an investigative attorney and a physical security expert in locks and safes. Trained as both a lawyer and a criminal investigator, he has been a certified polygraph examiner for the past twenty years, employed by government agencies and private clients. He works in the United States and has conducted thousands of polygraph examinations in criminal and civil investigations, in cases ranging from kidnapping and murder to employee theft from commercial businesses. The polygraph is used throughout the world by police and intelligence agencies for a variety of purposes, including the verification of statements by suspects and victims, plea bargains in criminal cases, and the vetting of government employees and intelligence agents to obtain and maintain security clearances. Marc Tobias has worked several high-profile cases; in one investigation, he conducted the polygraph examination of the career criminal in Sweden who provided the gun that killed that country's prime minister in 1986.

View original page

30 July 16:15The economics of revealing and protecting private information: Evidence from human subject experiments and surveys / Jens Grossklags, School of Information, University of California Berkeley

Lecture Theatre 2, Computer Laboratory, William Gates Building

Privacy and security decision-making depends not only on technological, but also economic, behavioral, and legal factors. The resulting privacy choices by individuals often appear puzzling and contradictory in comparison to results from opinion surveys indicating high concern for the sanctity of private information. In this talk I will discuss results from two studies
that shed light on the underlying drivers of these observations.

First, I will report on a study of software installations assessing the effectiveness of different notices for helping people to make better decisions on which software to install. Our study of 222 users showed that providing a short summary notice, in addition to the End User License Agreement (EULA), reduced the number of potentially harmful software
installations significantly. However, even with the introduction of short and conspicuous notices, as recommended by consumer interest groups and government agencies, many users installed programs and later expressed regret for doing so.

Second, I will present experimental results supporting the view that protecting information rests on different marketplace activities than giving information away, and that there is a significant gap between consumers' valuations of protecting privacy and of giving it up. These results have implications for the accurate measurement of privacy losses in legal proceedings, and should be taken into consideration when evaluating the desirability of consumer protection regulation.

Speaker's homepage:

http://www.ischool.berkeley.edu/~jensg/

View original page

30 May 14:15Smart-card based authentication on an insecure network / Peter Sweeney, Centre for Communication Systems Research, University of Surrey.

Lecture Theatre 1, Computer Laboratory

Standard means of authentication use PINs over secure terminals or secure networks. However there are many applications where proper authentication would be valuable, but the user may be connected to an insecure network, particularly the internet. In such circumstances, use of a PIN is inappropriate because of the ease of eavesdropping.

The work reported arose from an FP5 project to create a new 32-bit USB smart card and associated applets. The requirements are discussed and an image-based authentication method is described. Experimental work showed that the method was usable, but it has the potential disadvantage that no proof exists for its security. Moreover, it requires connection to an online database of images.

As an alternative, a method of provable security is put forward, which is potentially very suitable for implementation on a smart card. However the usability of the method is in question. There is also a potential active attack against this method, even though no strategy for the attack has yet been designed.

Speaker:
Peter Sweeney is a Reader in the Centre for Communication Systems Research at the University of Surrey. His main interests have always been in error-control coding, but as a side line he has also pursued research in other aspects of information theory, particularly cryptology and steganography.

View original page

15 May 16:15Realities of online banking fraud / Matthew Pemble, Vizuri

Lecture Theatre 2, Computer Laboratory, William Gates Building

The UK banking industry's response to online fraud is regularly criticised, both by journalists and by more informed commentators. The seminar will look at the economic and practical realities of current fraud issues: phishing and its variants, advance-fee fraud, the internationalisation of fraud, and (the general lack of) law enforcement response. Technical and procedural measures to improve matters will be discussed, including the complexities of the proper use of "strong authentication" technologies.

Slides can be found <a href="http://www.cl.cam.ac.uk/research/security/seminars/2007/2007-5-15_pemble.pdf">here</a>.

View original page

15 May 14:30Alertme.com - implementing wireless home security 2.0 / Amyas Phillips, Alertme.com

Room FW11, Computer Laboratory, William Gates Building

Alertme.com is a Cambridge based startup developing home security as an internet appliance. ZigBee based wireless sensors and actuators communicate with an internet-connected gateway, through which remote
servers provide services to customers' homes. The service is configured via a web-based UI, with a simple in-home interface for daily use. We will present a brief overview of the system architecture followed by a selection of technical challenges and our solutions, on which we would welcome criticisms and suggestions. Depending on our audience's interests, these could include deployment, coexistence and energy considerations of wireless sensor networks, the UI, digital security, distributed system design, networking in the home, manufacturing, and, ad hoc, any other topics of interest.

View original page

08 May 16:15Towards open trusted computing frameworks / Matt Barrett

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk will summarise the results of, and motivation for, my master's thesis, which looked at the feasibility of a trusted computing framework built entirely from open components. Each component was required to be as inspectable and verifiable as possible, and therefore able to be trusted by its users.

I will discuss in some detail a novel insertion attack against certain trusted computing frameworks built upon the Trusted Computing Group's Trusted Platform Module (TPM). Our insertion attack exploits a vulnerability that arises from the architecture of the TPM itself, and was published at COMPSAC 2006.

Bio:

Matt Barrett graduated from the University of Auckland's Computer Science Department (http://www.cs.auckland.ac.nz) with a MSc (Hons, 1st Class) in 2005. His thesis was titled 'Towards an Open Trusted Computing Framework,' available at http://www.cs.auckland.ac.nz/~cthombor/Students/mbarrett/mbarrettThesis.htm. Since then, he has been living and working in London. Previous research has included Microsoft's now defunct Next-Generation Secure Computing Base.

View original page

01 May 16:15The commercial malware industry / Peter Gutmann, University of Auckland

Lecture Theatre 2, Computer Laboratory, William Gates Building

Malware has come a long way since it consisted mostly of small-scale (if prolific) nuisances perpetrated by script kiddies. Today, it's increasingly being created by professional programmers and managed by international criminal organisations. This talk looks at the methods and technology employed by the professional malware industry, which is turning out "product" that matches (and in some cases even exceeds) the sophistication of standard commercial software, but with far more sinister applications.

Peter Gutmann's webpage:
http://www.cs.auckland.ac.nz/~pgut001/

View original page

01 May 14:15Phishing tips and techniques: tackle, rigging, and how and when to phish / Peter Gutmann, University of Auckland

Lecture Theatre 2, Computer Laboratory, William Gates Building

Despite the crypto wars having mostly ended some years ago, we don't seem to be any better off now that good crypto is widely available. The reason is that attackers exploit the weakest link, the user interface, and do an end-run around the crypto. This talk looks at the technical and psychological backgrounds behind why phishing works, and how this can be exploited to make phishing attacks more effective. To date, apart from the occasional use of psychology grads by 419 scammers, no-one has really looked at the wetware mechanisms that make phishing successful. Security technology doesn't help here, with poorly designed user interfaces playing right into the phishers' hands.

After covering the psychological nuts and bolts of how users think and make decisions, the talk goes into specific examples of user behaviour clashing with security user interface design, and how this could be exploited by attackers to bypass security speedbumps that might be triggered by phishing attacks. Depending on your point of view, this is either a somewhat hair-raising cookbook for more effective phishing techniques, or a warning about how these types of attacks work and what needs to be defended against.

Peter Gutmann's webpage:
http://www.cs.auckland.ac.nz/~pgut001/

View original page

06 March 16:15Alternative security mechanisms for WiFi networks / Daniel Cvrcek, Computer Laboratory, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Wireless (WiFi) networks constitute a cheap option for Internet access in some countries. Such networks are called 'community WiFi networks', as a group of people must establish a form of cooperation allowing them to agree on terms of usage and to collect money for Internet connection fees. Such an environment introduces a strong threat of insider attacks, which is one reason why security mechanisms based on a shared key are hard to deploy effectively. The talk will introduce an approach based on reputation systems, analysing the properties of network clients in relation to WiFi access points. We have implemented the system and deployed it, in a very basic form, in a community network with some 200 members, and we will give some results of the deployment. We will also give an overview of our current directions for improving the system.

View original page

27 February 16:15Power analysis attacks / Elisabeth Oswald, Department of Computer Science, University of Bristol

Lecture Theatre 2, Computer Laboratory, William Gates Building

Power analysis attacks allow keys to be extracted from cryptographic devices with low effort. While so-called differential power analysis attacks assume only very limited knowledge about the device under attack, template-based power analysis attacks assume much more knowledge. Naturally, this leads to better attacks. This talk will briefly survey existing power analysis techniques, with its emphasis on template-based power analysis attacks.
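The differential/correlation style of attack mentioned above can be illustrated with a toy simulation: guess a key byte, predict the power consumption of each encryption as the Hamming weight of an S-box output, and keep the guess whose predictions correlate best with the measured traces. This is a simplified sketch under invented assumptions (a stand-in S-box, simulated single-point traces), not the speaker's method or any real device.

```python
# Toy correlation-style power analysis: recover one key byte by correlating
# predicted leakage, hw(sbox(pt ^ guess)), against simulated power traces.
import random

SBOX = list(range(256))
random.Random(0).shuffle(SBOX)           # stand-in S-box (not AES's)
hw = lambda v: bin(v).count("1")         # Hamming weight leakage model

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

rng = random.Random(1)
SECRET = 0x3C
pts = [rng.randrange(256) for _ in range(200)]
# Simulated measurement: leakage of the S-box output plus Gaussian noise.
traces = [hw(SBOX[p ^ SECRET]) + rng.gauss(0, 0.5) for p in pts]

# The guess whose predicted leakage correlates best with the traces wins:
best = max(range(256),
           key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in pts], traces)))
print(hex(best))   # recovers the secret byte with high probability
```

A template-based attack replaces the generic Hamming-weight model with per-value probability distributions profiled on an identical device, which is why it needs far fewer traces.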

View original page

13 February 16:15Anonymity in the wild: Mixes on unstructured networks / Shishir Nagaraja, Computer Laboratory, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

With the growth of decentralised systems, unstructured networks, including social networks, are natural candidates for mix-network topologies that are resilient against a well-funded adversary who blocks access to a centralised mix network. We consider mix topologies where mixes are placed on the nodes of a social network. We analyse the anonymity such networks provide under high-latency conditions, and compare it with other sparsely connected mix networks. We prove that real network topologies such as scale-free networks mix efficiently. We also analyse mix topologies drawn from Kleinberg small-world and scale-free random graphs using simulations, and compare their performance with expander graphs. Finally, we show that mix networks over unstructured topologies are resilient to the vertex-order attacks of Barabási and Albert, though the batch sizes required to prevent intersection attacks may be a challenging requirement to meet.

Shishir Nagaraja's webpage can be found <a href='http://www.cl.cam.ac.uk/~sn275'>here</a>

View original page

06 February 16:15Data sharing and privacy in multi-agency working / Adam Warren, Department of Information Science, Loughborough University

Lecture Theatre 2, Computer Laboratory, William Gates Building

This paper analyses empirical data from a major, ESRC-funded research project concerning data-sharing and privacy in multi-agency working. The study provides the first systematic evidence about the ways in which local partnerships working in sensitive policy fields, including mental health and crime and disorder, attempt to strike settlements between sharing and confidentiality. Over 200 interviews were conducted in 77 organisations, covering four policy sectors, across England and Scotland. The analysis was framed by theory developed from the neo-Durkheimian tradition, and the research demonstrates that this theory has the power to identify and explain patterns of information-sharing styles adopted in local collaborative working.

The overall conclusion is that the stronger formal regulation by national government may well be leading to the greater prominence of hierarchical institutional forms. However, the findings demonstrate that reliance on such policy tools does not always lead to consistent and acceptable outcomes, not least because of unresolved conflicts of values and aims.

The project, Joined-up Public Services: Data-sharing and Privacy in Multi-Agency Working, was a £230,000 ESRC-funded study concerning the tensions between collaborative working and respect for confidentiality in the spheres of health and criminal justice. It was co-managed by Professors Chris Bellamy (Nottingham Trent University), Perri 6 (Nottingham Trent University) and Charles Raab (Edinburgh University). It has produced a number of outputs, including conference papers, book chapters and journal articles. A co-authored book, Partnership and privacy in the information state, will be published by Palgrave-MacMillan in 2007.

Dr Adam Warren has been a Research Officer at the Department of Information Science (DIS), Loughborough University since September 2005. He completed his PhD thesis, Fully Compliant? A Study of Data Protection Policy in Public Organisations, at DIS in June 2003. He was subsequently employed for two years as a Research Fellow on the data-sharing and privacy project.

Dr. Adam Warren's <a href='http://www.lboro.ac.uk/departments/dis/people/awarren.html'>homepage</a>

View original page

30 January 16:15A reciprocation-based economy for multiple services in P2P grids / Miranda Mowbray, HP Labs Bristol UK

Lecture Theatre 2, Computer Laboratory, William Gates Building

Designers of peer-to-peer grids aim to construct computational grids encompassing thousands of sites. To achieve this scale, the systems cannot rely on trust or off-line negotiations among participants. Moreover, without incentives for donation, there is a danger that free riding will prevail, leading the grid to collapse. Reciprocation-based incentive mechanisms have been proposed to deal with this problem. However, they have only been studied for the case in which a single service - processing power - is shared. In this paper we give a reciprocation-based mechanism for the case when multiple services, such as processing power and data transfers, are shared. In simulations of scenarios in which the services shared are combinations of two different basic services, the mechanism performs very well, even when the cost to peers of donating a service is nearly as large as the utility gained by receiving it.

Mini-bio:
Miranda Mowbray is a Technical Contributor at Hewlett-Packard Laboratories, Bristol. She studied political philosophy in the United States before obtaining an MA in Mathematics from Cambridge University and a PhD in Algebra from London University. She co-founded e-mint, the UK Association of Online Community Professionals. Miranda is at present a principal investigator for peer-to-peer technologies at HP Labs.

View original page

23 January 16:15Privacy preserving censorship / Yvo Desmedt, Department of Computer Science, University College London

Lecture Theatre 2, Computer Laboratory, William Gates Building

In many Western countries information is being censored, or plans are being made to do so. In Australia, Communications Minister Helen Coonan has suggested censoring the internet TV programme Big Brother, and two books have been censored. In Belgium, Information Minister Peter Vanvelthoven is looking into "censoring websites with illegal content or with illegal services" (translated from the official Belgian memorandum), or at least "informing customers that they entered a blacklisted site". Critics remember that before 1966 it was hard in small Belgian villages to buy books that were on the Vatican's "Index Librorum Prohibitorum" blacklist. Other examples of censorship in the West include the church's censorship of "non-traditional" gospels, the banning of Hitler's "Mein Kampf" in countries such as France and Germany, and the censoring of the Rolling Stones' performance during the Super Bowl on 5 February 2006 in the US. Texts describing in detail the construction of atomic bombs, or other classified information, are also censored.

Whether censorship benefits mankind is a non-scientific question, and therefore not the focus of this presentation. In this talk we discuss methods that can be used to censor networks. A problem with a straightforward solution is that the same censorship techniques might be used by terrorists or hackers to perform a denial-of-service attack. We therefore analyse how telecommunication providers can keep the details of how they censor private (i.e. protect against hackers with limited resources using the mechanism for denial of service) while at the same time being able to demonstrate to the authorities the capability to censor. This is impossible under the traditional models used to describe network reliability. We discuss an alternative model in which it can be achieved, and propose a zero-knowledge interactive proof for this problem.

No background information is required to be able to understand most of the lecture. This presentation is based on joint work with Yongge Wang and Mike Burmester and presented at the First International Workshop on Critical Information Infrastructures Security.

SHORT BIO:

Yvo Desmedt received his Ph.D. (Summa cum Laude) from the University of Leuven, Belgium (1984). He is presently the BT Chair of Information Security at University College London, UK. He is also a courtesy professor at Florida State University. His interests include cryptography, network security and computer security. He is program chair of the Workshop on Information Theoretic Security 2007, and was co-program chair of CANS 2005, and program chair of PKC 2003, the 2002 ACM Workshop on Scientific Aspects of Cyber Terrorism and Crypto '94. He is editor-in-chief of the IEE Proceedings of Information Security, and an editor of the Journal of Computer Security, of Information Processing Letters and of Advances in Mathematics of Communications. He has given invited lectures at conferences and workshops on five different continents.

2006

View original pageView slides

21 November 16:15Politics of Internet Security / Richard Allan, Cisco

Lecture Theatre 2, Computer Laboratory, William Gates Building

Based on his experience dealing with technology issues as a Member of Parliament, Richard will describe the UK political perspective on internet security. Using Parliamentary material, he will demonstrate that politicians are primarily concerned about the effects of "bad" content rather than threats to the technology infrastructure, and he will show how there is an increasing demand to use technical methods to limit access to content. This will include an analysis of the political events following a high-profile murder, which have led the government to announce measures to ban access to violent pornography via the internet.

More information about Richard Allan can be found <a href='http://www.richardallan.org.uk/?page_id=381'>here</a>.

Slides are <a href='http://www.cl.cam.ac.uk/research/security/seminars/2006/2006-11-21.ppt'>here</a>.

View original page

31 October 16:15Optically enhanced position-locked power analysis / Sergei Skorobogatov, Computer Laboratory, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk introduces a refinement of the power-analysis attack on integrated circuits. By using a laser to illuminate a specific area on the chip surface, the current through an individual transistor can be made visible in the circuit's power trace. The photovoltaic effect converts light into a current that flows through a closed transistor. This way, the contribution of a single transistor to the overall supply current can be modulated by light. Compared to normal power-analysis attacks, the semi-invasive position-locking technique presented here gives attackers not only access to Hamming weights, but to individual bits of processed data. This technique is demonstrated on the SRAM array of a PIC16F84 microcontroller and reveals both which memory locations are being accessed, as well as their contents.

View original pageView slides/notes

24 October 16:15 Becoming paranoid or, How I learned to start worrying and fear the Internet / George Neville-Neil

Lecture Theatre 2, Computer Laboratory, William Gates Building

While traditional research and development in security continues to focus on algorithms and protocols for securing data privacy during storage and transmission, another battle is being waged that is far more broad. Many of the problems in building secure systems come not from designing cryptographic systems, but in building whole systems so that they avoid common errors. Input validation, protocol design, and good clean code are far and away the more important issues to the majority of people building systems today. It is not the brilliant basement hacker who finds most of the holes, but the casual script kiddie and others with far less specialized skills.

This presentation will give an overview of the security landscape as it appears from inside a large Internet company along with many specific cases of the kinds of security issues that are found on a day to day basis. The goal is to make people truly paranoid.

Speaker:

George Neville-Neil is a member of the application security team of a large Internet company, with responsibilities that include system review, security tool authoring, and teaching about secure and fail-safe programming. He has taught at development centers in Silicon Valley, Asia and Europe and routinely tours international development centers to teach and address security concerns. He is the co-author of The Design and Implementation of the FreeBSD Operating System, as well as a columnist for ACM Queue Magazine, where he writes under the name Kode Vicious. Mr. Neville-Neil's research interests include networking, operating systems and security. He currently makes his home in Tokyo, Japan.

<a href='http://www.cl.cam.ac.uk/research/security/seminars/2006/2006-10-24.pdf'>Slides</a>

View original page

18 October 16:15The Polygraph / Marc Weber Tobias, Investigative Law Offices

Lecture Theatre 2, Computer Laboratory, William Gates Building

Marc Weber Tobias is an investigative attorney and a physical security expert in locks and safes. Trained as both a lawyer and a criminal investigator, he has been a certified polygraph examiner for the past twenty years, employed by government agencies and private clients. He works in the United States and has conducted thousands of polygraph (lie detector) examinations in criminal and civil investigations, in cases ranging from kidnapping and murder to employee theft from commercial businesses. The polygraph is used throughout the world by police and intelligence agencies for a variety of purposes, including the verification of statements by suspects and victims, plea bargains in criminal cases, and the vetting of government employees and intelligence agents to obtain and maintain security clearances. Marc Tobias has worked several high-profile cases; in one investigation, he conducted the polygraph examination of the career criminal in Sweden who provided the gun that killed the prime minister of that country in 1986.

View original page

06 October 16:00Distance bounding protocols: Authentication logic analysis / Catherine Meadows, Naval Research Laboratory

Lecture Theatre 1, Computer Laboratory, William Gates Building

The analysis of cryptographic protocols is by now a well-established application area for formal methods. However, there are many protocols that go beyond the traditional Dolev-Yao model for which these formal methods have been developed. In this talk we examine a particular class of such protocols, distance bounding protocols, designed to authenticate distance measurements in sensor networks. These rely not only on assumptions about the soundness of cryptographic functions, but also on physical assumptions about the time of flight of signals. We adapt the authentication logic of Pavlovic, Meadows, and Cervesato to reason about these protocols by incorporating the necessary physical assumptions as axioms and definitions in the system, and apply it to the analysis of a family of distance bounding protocols. We also discuss the potential for adding probabilistic reasoning to the logic to better capture the necessary physical assumptions.
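The physical assumption at the heart of distance bounding can be made concrete. The sketch below is an illustration of the general idea, not the talk's formal treatment: a verifier measures the round-trip time of a rapid challenge-response exchange and, since no signal travels faster than light, converts it into an upper bound on the prover's distance.

```python
# Illustrative only: the time-of-flight bound behind distance
# bounding protocols. The verifier measures the round-trip time of a
# challenge-response exchange; the prover can be no farther away
# than c * (rtt - processing_delay) / 2.

C = 299_792_458.0  # speed of light, metres per second

def distance_upper_bound(rtt_s, processing_delay_s=0.0):
    """Upper bound (metres) on the prover's distance, given the
    measured round-trip time and the prover's processing delay."""
    travel_time = rtt_s - processing_delay_s
    if travel_time < 0:
        raise ValueError("processing delay exceeds round-trip time")
    return C * travel_time / 2

# A 100 ns round trip with negligible processing bounds the prover
# to within about 15 metres of the verifier.
bound = distance_upper_bound(100e-9)
```

Note that an attacker can only inflate the measured time, so the bound fails safe: a dishonest prover can appear farther away than it is, but never closer.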

View original pageView slides/notes

26 September 16:15Privacy preserving data mining in distributed databases / Ehud Gudes, Department of Computer Science, Ben-Gurion University

Lecture Theatre 2, Computer Laboratory, William Gates Building

Privacy concerns have become an important issue in Data Mining. This seminar deals with the problem of association rule mining from distributed vertically partitioned data with the goal of preserving the confidentiality of each database. Each site holds some attributes of each transaction, and the sites wish to work together to find globally valid association rules without revealing individual transaction data. This problem occurs, for example, when the same users access several electronic shops purchasing different items in each, and the shops like to cooperate to obtain valid global rules without compromising their private databases.

In this talk, we first review the work on privacy-preserving rule mining in both centralized and distributed databases, and in both vertically and horizontally partitioned databases. We then present two algorithms for discovering frequent item sets and two algorithms for extracting the association rules. We analyze the security, privacy and complexity properties of the algorithms and compare them to the best known algorithms of Vaidya and Clifton.

View original page

19 September 16:15Daonity - Grid security with behaviour conformity from Trusted Computing / Wenbo Mao, HP Labs, China

Lecture Theatre 2, Computer Laboratory, William Gates Building

A central security requirement for grid computing can be referred to as behaviour conformity. This is an assurance that ad hoc related principals (users, platforms or instruments) forming a grid virtual organisation (VO) must each act in conformity with the rules for the VO constitution. Existing grid security practice has little means to enforce behaviour conformity and consequently falls short of satisfactory solutions to a number of problems.

Trusted Computing (TC) technology can add to grid computing the needed property of behaviour conformity. With TC, which uses an essentially in-platform (trusted) third party, behaviour conformity can be imposed on a principal, and this fact can be reported to interested parties who need only be ad hoc related to that principal. In this talk we report on Daonity, emerging work on a TC-enabled grid security standard, to show how behaviour conformity can help improve grid security.

View original page

08 September 16:00Peer-to-peer network topologies and anonymity / Nikita Borisov, Electrical and Computer Engineering Department, University of Illinois at Urbana-Champaign

Lecture Theatre 2, Computer Laboratory, William Gates Building

Peer-to-peer networks, due to their decentralized construction, are a natural platform for anonymous communication and large-scale p2p networks may be the key to widespread deployment of anonymous communications technologies. In order to be scalable, however, p2p networks must maintain a limited view of the network, thereby creating a restricted topology graph of nodes that can communicate with each other. As all communication must follow paths within the graph, we study the information that can be learned about the origin of a path based on observing intermediate nodes. We use both graph models and simulations in our analysis.

In our work, we contrast structured networks, where the topology of the graph follows a mathematical model, and unstructured ones, where arbitrary connections can be made. Unstructured networks often develop an emergent power-law topology; we have found that such topologies are detrimental to anonymity, both because they mix poorly (paths remain correlated with their starting point even after a large number of hops) and because the high-degree nodes can be subject to a targeted attack. We show that effective attacks against such networks can be carried out with only a moderate number of compromised nodes and without a global view of the network topology.

Structured networks, on the other hand, tend to have good mixing properties, and de Bruijn networks can be shown to achieve optimal mixing and therefore make an ideal candidate for anonymous p2p networks. We study the approximations to de Bruijn networks used in several p2p systems and show that they provide good anonymity on average, and acceptable anonymity in the worst case, even when the full topology of the network is known to the attackers.

View original page

25 July 16:15Milk or wine: does software security improve with age? / Andy Ozment, Computer Laboratory, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

We examine the code base of the OpenBSD operating system to determine whether its security is increasing over time. We measure the rate at which new code has been introduced and the rate at which vulnerabilities have been reported over the last 7.5 years and fifteen versions. We learn that 61% of the lines of code in today's OpenBSD are foundational: they were introduced prior to the release of the initial version we studied and have not been altered since. We also learn that 62% of reported vulnerabilities were present when the study began and can also be considered to be foundational. We find strong statistical evidence of a decrease in the rate at which foundational vulnerabilities are being reported. However, this decrease is anything but brisk: foundational vulnerabilities have a median lifetime of at least 2.6 years. Finally, we examined the density of vulnerabilities in the code that was altered/introduced in each version. The densities ranged from 0 to 0.033 vulnerabilities reported per thousand lines of code. These densities will increase as more vulnerabilities are reported.
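The density metric used above is straightforward to state; the tiny sketch below shows the computation, with invented figures rather than the study's actual data.

```python
# Vulnerability density: reported vulnerabilities per thousand lines
# of code (KLOC) altered or introduced in a version. The numbers
# below are made up for illustration, not taken from the OpenBSD study.

def density_per_kloc(vulns_reported, lines_changed):
    """Reported vulnerabilities per thousand changed lines."""
    return vulns_reported / (lines_changed / 1000)

# e.g. 5 reported vulnerabilities in 150,000 changed lines:
d = density_per_kloc(5, 150_000)  # about 0.033 per KLOC
```

Because the numerator only grows as further vulnerabilities are reported, any density measured at a given time is a lower bound, which is why the abstract notes the figures will increase.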

View original page

16 June 16:15Bluetooth Simple Pairing: public key cryptography in adhoc wireless systems / Robin Heydon and Steven Wenham, CSR

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

View original pageView slides

23 May 14:15Opening locks by bumping in five seconds or less: is it really a threat to physical security? / Marc Weber Tobias, Investigative Law Offices

Lecture Theatre 2, Computer Laboratory, William Gates Building

There are millions of pin tumbler locks in the world that provide the primary security for the consumer, business and government. The vast majority of these can be compromised in seconds with a minimal skill level and virtually no tools. The procedure is called "bumping" and was first developed in Denmark a quarter century ago, although the underlying theory of physics was in fact presented by Sir Isaac Newton over three centuries ago. Marc Weber Tobias presents an introduction to the technique of bumping and a detailed analysis of its real security threat.

View original page

19 May 16:00Network Security Monitoring / Richard Bejtlich, TaoSecurity

Room FW11, Computer Laboratory, William Gates Building

This presentation will introduce the tenets of network security monitoring (NSM) as defined and applied by Richard Bejtlich. Attendees will see how Bejtlich approaches incident detection and response by using statistical, session, full content, and alert data. The open source NSM suite Sguil (www.sguil.net) will be demonstrated via a free VMware image that attendees can try. Network-centric incident response and forensics issues will also be covered. Expect a lively discussion!

View original page

17 May 14:15CCTV in the UK: A failure of theory or a failure of practice? / Martin Gill, PRCI Ltd

Lecture Theatre 1, Computer Laboratory, William Gates Building

Although CCTV was heralded (by two Governments) as something of a silver bullet in the fight against crime, scholarly research has questioned the extent to which it 'works'. Martin Gill led the Home Office national evaluation of CCTV and has subsequently conducted further research with CCTV schemes across the country. In this talk he will outline the findings from the national evaluation and assess the perspectives of the public, scheme workers and offenders (including showing film clips of offenders talking at crime scenes) to show just why CCTV has not worked out as many expected. Martin will relate these findings to the current development of a national strategy.

View original pageView slides

16 May 16:15An overview of vulnerability research and exploitation / Peter Winter-Smith and Chris Anley, NGS Software

Lecture Theatre 2, Computer Laboratory, William Gates Building

Peter Winter-Smith and Chris Anley of NGS Software (a world-leading security assessment company) will give a presentation on some of the techniques and methods used by NGS consultants when performing security assessments and vulnerability research. This covers methodology and technique, as well as the tools and frameworks NGS consultants have used to discover a large number of vulnerabilities, both already known and as yet unknown to the general public.

View original page

09 May 16:15On inverting the VMPC one-way function / Kamil Kulesza, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Informally speaking, one-way functions are functions whose values are "easy" to compute from their arguments, but which are "computationally infeasible" to invert, i.e. to find the argument given the value. A rigorous definition of "easy" and "computationally infeasible" is necessary but would detract from the simple idea being conveyed. The existence of one-way functions is only conjectured and is closely connected with Cook's hypothesis: roughly speaking, if P is not equal to NP, such functions should exist. Apart from their theoretical importance, one-way functions are fundamental to complexity-based cryptography. The problem is being attacked in many ways and there are several instances perceived to be good candidates, for instance factorisation or the discrete logarithm. There are also practical reasons to search for new candidates.

We investigate the possibility of inverting the VMPC one-way function, which was proposed at Fast Software Encryption 2004 (VMPC stands for Variably Modified Permutation Composition). First, we describe the function in the language of permutation theory. Next, easily invertible instances of VMPC are derived. We also show that no VMPC function is one-to-one. Implications of these results for cryptographic applications of VMPC conclude the presentation.
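For concreteness, the level-1 VMPC transform is commonly stated as Q[x] = P[(P[P[x]] + 1) mod n] for a permutation P of {0, ..., n-1}; the conjectured one-way function maps P to Q. A minimal sketch, assuming that definition (the talk itself concerns inverting this map, which is not attempted here):

```python
def vmpc(P):
    """Level-1 VMPC transform of a permutation P of {0, ..., n-1}:
    Q[x] = P[(P[P[x]] + 1) mod n]. In x, Q is a composition of
    bijections and so is itself a permutation; the conjectured
    one-wayness concerns recovering P from Q."""
    n = len(P)
    return [P[(P[P[x]] + 1) % n] for x in range(n)]

# With the identity permutation, Q is just the cyclic shift
# x -> (x + 1) mod n: an example of an easily invertible instance.
Q = vmpc(list(range(8)))
```

Seeing such trivially invertible instances makes the talk's question natural: how far does easy invertibility extend across the space of permutations?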

View original page

31 March 16:00Enhancing Signature-based Collaborative Spam Detection / Jeff Yan, University of Newcastle upon Tyne

FW11, Computer Laboratory, William Gates Building

To date, statistical spam filters are probably the most heavily studied and most widely adopted technology for detecting junk email. However, among other disadvantages, they fail to detect spam that cannot be predicted by the machine learning algorithms on which they are based, nor do they identify spam sent in an image format. In addition, these filters need to be regularly retrained, particularly when false positives occur. Signature-based collaborative spam detection (SCSD) seems to provide a promising solution to all these problems. Particularly attractive is that it offers a reasonable way to detect unforeseeable new spam, which intuitively appears to be mission impossible. In this talk, I will discuss research issues in SCSD, and report our enhancements to two representative systems, Razor and DCC. One key problem we address is that SCSD approaches usually rely on huge databases of email signatures (i.e., checksums), demanding substantial resources for signature lookup as well as for signature database storage, transmission and merging. In our enhancements, signature lookups can be performed in O(1), i.e. constant, time, independent of the number of signatures in the database. A space-efficient representation can reduce signature database size by a factor of 25.6 or more for Razor-style systems, before any data compression algorithm is applied. A simple but efficient algorithm for merging different signature databases is also supported. If time allows, some ongoing work and open problems will also be discussed.
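The abstract does not spell out the data structure behind these properties. A Bloom filter is one standard way to obtain O(1) lookups, compact storage, and trivial database merging all at once, so the sketch below uses one; this is an illustrative choice of structure, not necessarily the authors' actual design.

```python
import hashlib

class SignatureFilter:
    """Illustrative Bloom filter over email signatures: membership
    tests cost O(1) regardless of how many signatures are stored,
    at the price of a tunable false-positive rate."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, signature):
        # Derive k bit positions from the signature via salted SHA-256.
        for salt in range(self.k):
            digest = hashlib.sha256(bytes([salt]) + signature).digest()
            yield int.from_bytes(digest[:8], 'big') % self.size

    def add(self, signature):
        for p in self._positions(signature):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, signature):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(signature))

    def merge(self, other):
        # Merging two signature databases is a bitwise OR of filters.
        self.bits = bytearray(a | b for a, b in zip(self.bits, other.bits))
```

The merge operation shows why such representations suit collaborative detection: two sites can combine databases without exchanging or re-hashing individual signatures.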

View original page

07 March 16:15Security Flaws in Tunnel Mode IPsec / Kenny Paterson, Royal Holloway, University of London

FW11, Computer Laboratory, William Gates Building

We present a variety of attacks that efficiently extract plaintext data from IP datagrams that are protected using the IPsec protocol ESP in tunnel mode. In contrast to earlier attacks of Bellovin, our attacks require only small amounts of time and network bandwidth to be successful. The attacks apply in situations where the IP packets are not integrity protected, or where integrity protection is supplied only by a higher layer protocol. While strongly discouraged by experts, these configurations of IPsec are still allowed by the relevant IPsec standards. In addition, we believe that these configurations may be widely used in practice. We report on successful implementation of the attacks against an IPsec VPN built using the native implementation of IPsec in Linux.

Joint work with Arnold K.L. Yau.

View original page

28 February 16:15Hiding on an Ethernet / Richard Clayton, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Traceability on the Internet is the process of determining who was using a particular IP address at a particular time. In this talk I will show how fuzzy this idea becomes at the edges of the network when users are on an Ethernet, a broadcast medium, where the notion of identity becomes a matter of agreement rather than immutable fact. The hacker community has long known about ARP spoofing, but I've found a new trick. As part of my PhD work I built some hardware that permitted one machine to borrow someone else's IP address and Ethernet MAC address and thereby impersonate them, even while they were actively using their machine. Then, by chance, I found that I'd taken far too complicated an approach: modern software firewalls, which are supposed to make you more secure, permit others to impersonate you with impunity. This has significant implications not only for traceability, but also for the builders of NATs, and especially for the business models of those who overcharge for their WiFi hotspots.

View original page

21 February 16:15Design and implementation of a CC CAPP-compliant audit subsystem for the Mac OS X and FreeBSD operating systems / Robert N M Watson, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Completing the Common Criteria CAPP (C2) security evaluation of Apple's Mac OS X operating system required the development of a significant new operating system feature, security event auditing. This facility provides for the fine-grained, configurable, and reliable logging of security events ranging from authentication events in user space to system call access control information throughout the kernel. As the leader for the team that implemented Audit for Apple, I had the opportunity to gain interesting insight into the evaluation requirements and process, as well as into the implementation implications of these requirements. This presentation will describe the requirements and how they have been implemented in traditional UNIX systems, as well as how some of the design decisions that make Mac OS X unique impacted the implementation of Audit. I'll also talk briefly about the later port of this source code base to the open source FreeBSD operating system, and the OpenBSM software package, which provides a portable implementation of the de facto industry standard BSM API and file format originally developed by Sun.

View original pageView slides/notes

31 January 16:15Covert channels in TCP/IP: attack and defence / Steven J. Murdoch, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk will show how idiosyncrasies in TCP/IP implementations can be used to reveal the use of several steganography schemes, and how they can be fixed. The analysis can even be extended to remotely identify the physical machine being used.

A number of steganography techniques have been designed to insert a covert channel into seemingly random TCP/IP fields, such as the IP ID, the TCP initial sequence number (ISN) or the least significant bits of the TCP timestamp. While compliant with the TCP/IP specification, their output is unlike anything an unmodified operating system would generate. This talk will show how, by taking into account the implementation of the TCP/IP stack, a number of such specification-based steganography schemes can be broken. These include Nushu, an ISN-based scheme presented at 21C3.

Firstly the talk will introduce the field of covert channels, and TCP/IP steganography in particular, giving an overview of the steganographic potential of the different fields in the protocol. This will show that only the IP ID and TCP ISN can plausibly be used for steganography. The talk will then describe how these fields are generated, and how steganography schemes which do not properly take these algorithms into account can be detected.

The talk will then present improved TCP/IP steganography schemes for Linux and OpenBSD which, by deriving a reversible transformation from the standard TCP/IP stacks' implementations, produce a covert channel that is much harder to detect. Such a scheme can be shown to be as strong as the underlying encryption when attacked by an adversary monitoring packet contents.

Finally, a side effect of the steganography detection system is to reveal microsecond-level deviations in the clock speed of the device being monitored. Clock-skew varies from computer to computer so can act as a fingerprint of a particular physical device. This talk will show how this fact can be used to track physical devices across the Internet, and how the use of TCP ISNs can improve over schemes based on TCP timestamps.
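The fingerprinting step can be illustrated with a least-squares fit; this is a generic sketch of the idea rather than the talk's actual method. The slope of observed timestamp offsets against measurement time estimates a device's relative clock skew, in parts per million.

```python
# Illustrative clock-skew estimation: fit a line to (time, offset)
# samples gathered from a remote host's timestamps. The slope is the
# relative clock skew, which is stable enough per device to serve as
# a fingerprint of the physical machine.

def estimate_skew_ppm(times, offsets):
    """Ordinary least-squares slope of offset (s) against time (s),
    scaled to parts per million."""
    n = len(times)
    mean_t = sum(times) / n
    mean_o = sum(offsets) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in zip(times, offsets))
    den = sum((t - mean_t) ** 2 for t in times)
    return (num / den) * 1e6

# A clock gaining 50 microseconds every second shows a skew of 50 ppm.
times = [float(t) for t in range(60)]
offsets = [t * 50e-6 for t in times]
skew = estimate_skew_ppm(times, offsets)
```

Real measurements are noisy and quantised by the timestamp resolution, which is why a field with finer effective resolution, such as the ISN-based approach mentioned above, can improve on timestamp-based schemes.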

This work was done in conjunction with Stephen Lewis.

2005

View original page

30 November 16:15Orion: Named Flows with Access Control / Alexander (Sandy) Fraser, Fraser Research

Lecture Theatre 1, William Gates Building

Unix file system semantics, applied to the host/network interface for a wide area network, lead to a compact definition of a communications service and provide a versatile framework for privacy in computer communications. Flows are named connections between processes, and a network is a flow that contains other flows. Hierarchical design limits the scope of a name, and access permissions put limits on flow access. Services publish their names on the network. Pure clients, who by default have no need of a public name, are invisible and are not vulnerable to direct attack.

Processes communicate through Orion: a file system-like interface that hides details of network operation from applications and users alike. Many different implementations are possible, and can coexist behind this unifying interface. Not only is this architecture a substantial step towards a network that can evolve independently of its users, it is also a framework under which disparate internets can coexist behind a single user interface.

View original page

23 November 16:15Semantic Video Content Analysis for Security / Shaogang Gong, Queen Mary University of London

Lecture Theatre 1, William Gates Building

There is huge demand for fully automated semantic video content analysis, due to the massive increase in video media over the last decade. However, there is also a lack of effective analytical tools to extract the most relevant information automatically, in context and in good time, especially when dealing with CCTV video data of public spaces. Significantly, human attention span usually lasts no more than 15-20 minutes, resulting in highly inconsistent and error-prone manual content labelling and extraction of CCTV video. Furthermore, the lack of any structured script or embedded meta-data in security and surveillance video, such as is present in most commercial and entertainment video, makes automated semantic content analysis of such video data extremely difficult.

In this talk, I will present recent results on activity event and behaviour based video content analysis of security and surveillance video. I will highlight that some of the fundamental problems in security video content analysis are more than merely object tracking and trajectory matching. I will address the problem of modelling and recognising complex activities involving simultaneous movement of multiple overlapped objects. Dynamic probabilistic graph models are exploited for modelling the temporal relationships among a set of different object temporal events and are used to profile and index salient event and behaviour patterns captured in CCTV video, and for the detection of atypical and abnormal behaviours. I will also briefly discuss the problem of extracting and synthesising high-resolution image patches of saliency in low-resolution CCTV content under motion blur, especially in the context of face recognition in low-resolution CCTV video.

Shaogang Gong is Professor of Visual Computation at Queen Mary, University of London, elected a Fellow of the Institution of Electrical Engineers, a member of the UK Computing Research Committee, and Head of the Queen Mary Computer Vision Research Group, which he founded in 1993. He received his DPhil in computer vision from Oxford in 1989 with a thesis on the computation of optic flow using second-order geometric analysis. He received a Queen's Research Scientist Award in 1987, was a Royal Society Research Fellow in 1987 and 1988, and was a GEC-Oxford fellow in 1989. He twice won the Best Science Prize of the British Machine Vision Conferences (1999 and 2001) and won the Best Paper Award (2001) of the IEEE International Workshops on Recognition, Analysis and Tracking of Faces and Gestures. He is the principal author of a book on Dynamic Vision: From Images to Face Recognition (Imperial College Press, 2000). His work focuses on visual motion and video analysis with applications to the detection, tracking and recognition of vehicles and human objects, activity profiling, behaviour recognition and abnormality detection in CCTV & live video. A current significant focus is on security for crime prevention and detection, funded by the MOD, EPSRC, DTI and industry.

View original pageView slides/notes

15 November 16:15Prêt à Voter: Practical, Voter-verifiable Elections / Peter Ryan, Newcastle University

Lecture Theatre 2, William Gates Building

Voting systems provide the bedrock of democracy. Recently, voting systems and technologies have been the subject of considerable attention, for example, the concerns raised about the legitimacy of the 2000 and 2004 US presidential elections or about postal voting in this country. Designing voting technologies and systems that are trustworthy, practical and acceptable to the various stakeholders (electorate, politicians, election officials, security experts etc.) raises formidable challenges.

In this talk I will describe the Prêt à Voter scheme. This scheme, based on an earlier scheme due to Chaum, has the surprising property of voter-verifiability: voters can confirm that their vote is accurately included in the tally, whilst at the same time preserving ballot secrecy. This is achieved with minimal dependence on components of the system by providing maximal transparency within the constraints of ballot secrecy.

I will discuss some of the assumptions underlying the current scheme, and associated potential vulnerabilities, and describe possible countermeasures. I will also describe coercion-resistant adaptations of the original, supervised scheme to the remote voting context.

slides

View original page

25 October 16:15Addressing the Data Problem: Investigating Computer Crime / Ian Walden, Centre for Commercial Law Studies, Queen Mary and Westfield College, University of London

Lecture Theatre 2, William Gates Building

When cybercrimes are carried out, the ability of law enforcement agencies to investigate and prosecute the perpetrators will be driven by the availability and accessibility of data to the investigators, whether for intelligence gathering, evidential retrieval or subsequent analysis and presentation. Any criminal investigation interferes with the rights of others, whether the person is the subject of an investigation or a related third party. In a democratic society any such interference must be justifiable and proportionate to the needs of the society to be protected. This presentation will consider the problems raised by data for law enforcement agencies investigating cybercrime. It will examine recent legislative measures and proposals in the UK and Europe to address some of these problems of criminal procedure, and the extent to which such measures achieve an appropriate balance between potentially conflicting interests.

View original page

21 October 16:00Nuclear Weapons, Permissive Action Links, and the History of Public Key Cryptography / Steve Bellovin, Columbia University

Lecture Theatre 2, William Gates Building

From a security perspective, command and control of nuclear weapons presents a challenge. The security mechanisms are supposed to be so good that they're impossible to bypass. But how do they work? Beyond that, there are reports linking these mechanisms to the early history of public key cryptography. We'll explore the documented history of both fields, and speculate on just how permissive action links — the "combination locks" on nuclear weapons — actually work.

View original page

21 October 14:00Xilinx Virtex Bitstream Security / Steve Trimberger, Xilinx

Lecture Theatre 2, William Gates Building

Memory-programmed FPGAs are loaded on power-up from an external non-volatile memory. An attacker can intercept the bitstream at that point, modify it, reverse-engineer it or make unauthorized copies of it. Since the introduction of Virtex-II, Xilinx has offered the option to encrypt bitstreams to ensure data privacy. This presentation describes the design decisions, features and restrictions of the Virtex-II bitstream security.

Biography:

Steve Trimberger received his PhD from Caltech at the dawn of the VLSI era, working with Carver Mead and Ivan Sutherland at Caltech, and Lynn Conway and Doug Fairbairn at Xerox PARC. Dr. Trimberger was a member of the original Design Technology group at VLSI Technology and joined Xilinx in 1988.

At Xilinx, Dr. Trimberger was a member of the architecture definition group for the Xilinx XC4000 FPGA and the technical leader for the XC4000 design automation software. He led the architecture definition group for the Xilinx XC4000X device families. He managed the Xilinx Advanced Development group for many years and is currently Distinguished Engineer in Xilinx Research Labs in San Jose where he leads the Circuits and Architectures Group. His research interests include low-power FPGAs, novel uses of reconfiguration, and cryptography.

Dr. Trimberger has written three books and dozens of papers on design automation and FPGA architectures. He is an inventor on more than one hundred patents in the fields of integrated circuit design, FPGA and ASIC architecture, CAE and cryptography. He has served as Design Methods Chair for the Design Automation Conference, Program Chair and General Chair for the ACM/SIGDA FPGA Symposium and on the technical programs of numerous Workshops and Symposia.

View original page

12 October 16:15Natural Randomness as a Fingerprint: Using Nanotechnology to Fight Counterfeiting / Russell Cowburn, Imperial College

Lecture Theatre 1, William Gates Building

We have found [1] that almost all paper documents, plastic cards and product packaging contain a unique physical identity code formed from naturally-occurring microscopic imperfections in the surface. This covert 'fingerprint' is intrinsic, robust and virtually impossible to modify controllably. It can be considered as a biometric identifier for inanimate objects. It can be rapidly read using a low-cost portable laser scanner, which uses the physics of laser speckle in order to probe the surface with sub-micrometre accuracy. Many forms of document and branded-product fraud could be rendered obsolete by use of this code.

[1] Nature 436, 475 (2005)

Russell Cowburn obtained his PhD in condensed matter physics from the University of Cambridge in 1996. He then joined the Nanoscale Science Group in Cambridge University Engineering Department, where he worked as a post-doc for 1 year and as a Research Fellow of St John's College for 3 years, before being appointed to a faculty position at the University of Durham in 2000. In January 2005 he became Professor of Nanotechnology in the Department of Physics at Imperial College London, where he leads a large research group studying applications of nanotechnology to computer memory, cancer treatment and fraud prevention. He is Director of two high technology spin-out companies working in the area of nanotechnology.

View original page

12 May 16:15Inoculating SSH Against Address-Harvesting Worms / Stuart Schechter, MIT

Lecture Theatre 1, William Gates Building

Over the past year, attacks on SSH have compromised major supercomputing facilities, educational institutions, and national laboratories. These attacks have shown our current mechanisms for authenticating users, and then isolating them from each other, to be inadequate.

I will describe the mechanisms that have been used to attack SSH and other remote execution mechanisms, and then present data to help explain why these attacks have been so successful. I will describe countermeasures that can be used to make SSH more resilient to some of these attacks. However, other attacks require us to rethink our entire approach to authenticating ourselves to remote hosts and services and authorizing other hosts to perform tasks on our behalf.

View original page

19 April 14:30Sensor Network Security / Dan Cvrcek, University of Cambridge

Room FW26, William Gates Building

Wireless sensor networks represent an interesting environment for a number of problems related to distributed systems. They have specific restrictions (power consumption), unusual routing requirements (nodes/motes have no idea of the network topology when deployed), and, since the information produced by nodes gains value when aggregated, there is space for new security protocols. We have put some effort into simulating the security of key agreement protocols against an attacker controlling only a fraction of the network (key infection, secrecy amplification). The talk will briefly survey several existing key management schemes and highlight some interesting results we have obtained for key infection protocols.
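The key-infection idea sketched above can be illustrated with a toy simulation: initial keys are exchanged in plaintext at deployment, an eavesdropper overhears each exchange with some probability, and secrecy amplification repairs a link key unless the attacker also overhears the relayed refresh. The function name, model, and parameters below are illustrative assumptions, not the talk's actual simulation.

```python
import random

def key_infection(links=10000, eavesdrop=0.05, amplify=False, seed=0):
    """Estimate the fraction of link keys an attacker learns when keys are
    broadcast in plaintext at deployment (key infection). With secrecy
    amplification, each pair also mixes in a fresh value relayed via a
    neighbour, so the attacker must overhear that path as well. Toy model:
    each path is overheard independently with probability `eavesdrop`."""
    rng = random.Random(seed)
    compromised = 0
    for _ in range(links):
        learned = rng.random() < eavesdrop               # direct exchange overheard?
        if amplify:
            learned = learned and rng.random() < eavesdrop  # relay path overheard too?
        compromised += learned
    return compromised / links

plain = key_infection()                  # ~eavesdrop
amplified = key_infection(amplify=True)  # ~eavesdrop**2
```

Amplification squares the attacker's required luck per link, which is why even a weak eavesdropper loses most of its coverage.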

View original page

31 March 16:15Cybersecurity - What Can We Do About It? / Chuck Pfleeger, Pfleeger Consulting Group

Room FW11, William Gates Building

We are reasonably effective at catching the 80-90% of cyber-attacks that are simple. But what about the others? What about the sophisticated attackers who might plant an exploit today, with a view to reaping the rewards in five years?

View original page

15 March 16:15Certificate Management Using Distributed Trusted Third Parties / Alex Dent, Information Security Group, Royal Holloway, University of London

Lecture Theatre 2, William Gates Building

Trust is a key component in any ubiquitous computing system. Users have to trust the devices to be secure, devices have to authenticate the users in order to trust their inputs and devices have to trust each others' identity and authorisation. A central question in dealing with trust is how to distribute copies of a user's public key in such a way that other users can verify that it does, indeed, belong to the user that claims ownership. Traditional answers to this question have involved using a trusted Certificate Authority (CA) to generate and distribute digitally signed certificates that bind a user's name to his public key (and any other data that may be required). However, the centralised CA model is particularly unsuited to the rapidly changing, ad hoc network topologies that are associated with ubiquitous computing environments.

Our solution to the problem of running a CA in a ubiquitous computing environment is to allow every user in that environment to download a "CA applet" – a self-contained application that will run on the user's SEE and will issue certificates for that user's public keys (and, potentially, for other users that have been authorised by a pre-determined policy). Furthermore, that applet may, optionally, take the role of the directory service and make these certificates available to other network users. Hence, these CA applets may be placed anywhere within a network's topology, as required by either the user or by some sort of controlling entity.

This talk discusses methods whereby a CA-applet scheme can be implemented, the situations where it might be useful to do so and the problems that are present with this approach.

View original page

1 March 16:15Embedded devices as an attack vector / Stephen Lewis, University of Cambridge

Lecture Theatre 2, William Gates Building

The use of embedded devices present on a network as a vector for attacks against endstations is a threat that has not yet been realized, despite a number of known vulnerabilities affecting such devices. This is probably due to the resistance of such devices to reverse engineering: they frequently run custom operating systems on obscure architectures.

Using embedded devices as a vector for attack does, however, have two significant advantages:

  • Detection of the code running on the embedded device is much harder than it would be on a general purpose computer: few tools are available, and a severely limited interface is presented to the end user
  • Embedded devices in the form of network infrastructure provide an excellent platform for attack, because they are ideally placed for covert monitoring and insertion of traffic

When hard-to-detect malicious code can be uploaded to embedded devices on a network, a number of different attacks become feasible. A packet sniffer running on a network switch itself could be used to forward packets matching a particular signature to a third party. Packets could also be generated on the device itself, perhaps in order to mount attacks on end-systems. An attack mounted in this manner would be far harder to contain than one initiated from a normal PC, especially if the ability to reflash the firmware on the device were disabled by the inserted code.

I am currently working on reverse engineering the firmware present in a widely-used switch based around a Motorola 68EC020 processor, and aim to present a demonstration of the insertion of custom code into this device.

View original page

15 February 16:15The Convergence of Anti-Counterfeiting and Computer Security / Steven J. Murdoch, University of Cambridge

Lecture Theatre 2, William Gates Building

Since January 2004, many major graphics software and hardware manufacturers have included anti-counterfeiting measures in their products (including Adobe Photoshop, JASC Paint Shop Pro, HP printers and Canon scanners). The feature operates by detecting characteristics of banknotes and preventing a suspicious image from being processed. The software is developed by the G10 Central Bank Counterfeit Deterrence Group and provided to manufacturers as a compiled library. No details of what features the system detects are publicly available, and it has been established that it does not use the same counterfeit-deterrence technique used in colour photocopiers.

The lecture will first cover background information on existing counterfeit deterrence systems, designed to prevent currency being copied on conventional printing equipment. It will then move on to more modern techniques, developed in reaction to the widespread deployment of high-quality digital printing hardware. The field of digital watermarking will also be introduced and its relationship to counterfeit deterrence discussed. The lecture will cover the progress of a project to understand the currency detection feature and reverse engineer it. This includes conventional reverse-engineering techniques such as disassembly and dynamic code analysis, but it will also describe application-specific tools, such as black-box digital watermark benchmarking.

Finally, proposed EU legislation would make the inclusion of such a system mandatory, so the consequences for Free and open source software will be discussed. These are in addition to conventional DRM problems, such as the prevention of legal manipulation of currency images, and other problems specific to counterfeit deterrence.

View original page

18 January 16:15Mixnets for Electronic Voting / Ben Adida, MIT

Lecture Theatre 2, William Gates Building

Voting is a peculiar security problem, with seemingly contradictory requirements of anonymity and verifiability. One important tool in the fulfilment of these requirements is the verifiable mixnet. This talk reviews the high-level challenges of election protocols, the specific trend of verifiable mixnets used in these protocols, and the current challenges that we are trying to address, particularly with respect to the rapid delivery of verified election results.

2004

View original page

13 December 17:15National Security on the Line: Electronic Communications in an Age of Terror / Susan Landau, Sun Microsystems Laboratories

Lecture Theatre 2, William Gates Building

Wiretaps have been an element of U.S. law-enforcement and foreign-intelligence investigations for over a quarter century. During this period, communications technology has substantially changed. Law enforcement has sought to keep laws current with the new technology. But new technology brings new threats and it is not clear that the FBI's latest efforts to extend the Communications Assistance for Law Enforcement Act (CALEA) to Voice over IP would actually improve the total security equation. In this talk, we discuss national-security and law-enforcement wiretapping and the Internet, and what security means in this context.

View original page

23 November 16:15Questioning the Usefulness of Identity-based Key Cryptography / Yvo Desmedt, UCL

Since Boneh-Franklin's 2001 paper on "Identity based encryption from the Weil pairing," the research on identity based cryptography and the work on applying bilinear maps to cryptography are both flourishing. Shamir, in 1984, proposed the idea of "identity-based" cryptography to avoid a Public Key Infrastructure. Instead of having the users have their own public key, the identity of the user is the "public key," and a trusted center provides each party with a secret key.

We critically analyze whether Shamir's identity-based concept allows us to avoid a public key infrastructure. We argue the need for at least a registration infrastructure, which we call a "basic Identity-based Key Infrastructure." Moreover, we demonstrate that, if secret keys of users can be stolen or lost, the infrastructure required to deal with this is as complex as that of a PKI. Our discussion extends to the case where the traditional PKI is replaced by an on-line PKI, as introduced by Rivest (1998).

We conclude by surveying possible useful applications of identity-based cryptography. Note: no number theory will be used in this lecture.

16 November 16:15Detection of LSB Matching Steganography in Images / Andrew Ker, Oxford University Computer Laboratory

View original page

26 October 16:00Data remanence in non-volatile semiconductor memories. Part I: Introduction and non-invasive approach / Sergei Skorobogatov, University of Cambridge

Security protection in microcontrollers and smartcards with EEPROM/Flash memories is based on the assumption that information from the memory disappears completely after erasing. Chip manufacturers have been very successful in making their hardware design robust against all sorts of attacks, but they share a common problem of data remanence in floating-gate transistors. The information stored inside an EEPROM/Flash cell in the form of a charge on the floating gate changes some parameters of the storage transistor, so that even after an erase operation the transistor does not return to its initial state, allowing the attacker to distinguish between previously programmed and unprogrammed transistors and restore the information from the erased memory. In practice the attack can be done in different ways. The cheapest way is to measure the parameters of the transistor non-invasively, by observing voltage- and time-dependent characteristics of each memory cell inside the array. Fortunately for security, this only works with a very limited number of chips. However, the fact that the information does not disappear completely after the memory is erased forces developers to implement additional protection. This talk summarises the research done in this direction so far and shows how much information can be extracted from some Microchip PIC microcontrollers after their memory has been 'erased'.
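The final classification step described above is simple to sketch: once a per-cell analogue parameter has been measured after erasure, a calibrated boundary separates cells that previously held a programmed bit from those that did not. The function name, boundary, and readings below are purely illustrative; real measurements and calibration are device-specific.

```python
def recover_erased_bits(cell_readings, boundary=0.5):
    """Guess each cell's pre-erase state from an analogue measurement
    (e.g. an apparent threshold-voltage shift) taken after erasure.
    Previously programmed floating-gate cells do not return fully to the
    virgin state, so their readings fall on one side of a boundary that
    an attacker would calibrate per device."""
    return [1 if v > boundary else 0 for v in cell_readings]

# Hypothetical per-cell measurements after an 'erase': shifted cells
# betray the bits that were programmed before erasure.
bits = recover_erased_bits([0.12, 0.71, 0.18, 0.93, 0.05])
```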

View original page

15 October 16:30Exploiting the Transients of Adaptation for RoQ Attacks on Internet Resources / Azer Bestavros, Boston University Computer Science Department

Over the past few years, Denial of Service (DoS) attacks have emerged as a serious vulnerability for almost every Internet service. An adversary bent on limiting access to a network resource could simply marshal enough client machines to bring down an Internet service by subjecting it to sustained levels of demand that far exceed its capacity, making that service incapable of adequately responding to legitimate requests. In this talk I will expose a different, but potentially more malignant adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. In particular, I will show that a determined adversary could bleed an adaptive system's capacity or significantly reduce its service quality by subjecting it to an unsuspicious, low-intensity (but well orchestrated and timed) request stream that causes the system to become very inefficient, or unstable. I will give examples of such "Reduction of Quality" (RoQ) attacks on a number of common adaptive components in modern computing and networking systems. RoQ attacks stand in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as recently proposed "shrew" attacks that exploit specific protocol settings. I will present numerical and simulation results, which are validated with observations from real Internet experiments.

This work was done in collaboration with Mina Guirguis and Ibrahim Matta.
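The core RoQ observation, that a low-intensity but well-timed disturbance keeps an adaptive system stuck in its recovery transient, can be sketched with a toy AIMD (additive-increase/multiplicative-decrease) flow. The function and parameters below are illustrative assumptions, not the paper's model.

```python
def aimd_throughput(capacity=100.0, alpha=1.0, beta=0.5,
                    rounds=10000, attack_period=0):
    """Average delivered rate of an AIMD flow (additive increase `alpha`,
    multiplicative decrease factor `beta` on loss). Setting `attack_period`
    > 0 models an attacker whose brief burst every `attack_period` rounds
    forces an extra loss event, trapping the flow in its slow linear
    recovery transient rather than exhausting steady-state capacity."""
    rate, delivered = capacity, 0.0
    for t in range(1, rounds + 1):
        loss = rate > capacity or (attack_period and t % attack_period == 0)
        rate = rate * beta if loss else rate + alpha
        delivered += min(rate, capacity)
    return delivered / rounds

baseline = aimd_throughput()
attacked = aimd_throughput(attack_period=50)  # one tiny burst per 50 rounds
```

Even though the attacker is idle almost all of the time, the periodic forced back-off measurably depresses average throughput, which is the sense in which RoQ attacks are cheaper than brute-force DoS.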

View original page

28 September 16:45AUTODAFÉ: An act of software torture / Martin Vuagnoux, Ecole Polytechnique Fédérale de Lausanne

In his 1950 paper "Computing Machinery and Intelligence", Turing highlighted, for the first time, the risks of bad input validation in software. The problem has not gone away. Buffer overflows, which account for a third of the vulnerabilities discovered in the past decade, are today the best studied example.

Automatic vulnerability-search tools have led to an explosion in the rate at which such flaws are discovered today. One particular technique is fault injection: the insertion of random, atypical data into input files or protocol packets, combined with monitoring for memory violations. Existing tools for this are still rather crude. Their success is more a testimony to the high density of flaws in fielded software than the result of good test coverage. This talk presents a new optimized approach for performing such "fuzzing" tests and will include a demonstration of the "Autodafé" tool that implements it.
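The fault-injection loop described above can be sketched in a few lines: mutate a well-formed input, feed it to the target, and watch for faults. Everything below (the toy parser, the mutation strategy, the use of exceptions as a stand-in for memory violations) is an illustrative assumption, not how Autodafé works.

```python
import random

def mutate(data, n_flips=4, seed=None):
    """Randomly corrupt a few bytes of a well-formed input (fault injection)."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def toy_parser(packet):
    """A deliberately fragile parser: it trusts the length field in its input."""
    length = packet[0]
    body = packet[1:1 + length]
    if len(body) != length:          # the kind of check real code forgets
        raise ValueError("truncated body")
    return body

def fuzz(target, template, iterations=1000):
    """Feed mutated inputs to the target and count faults. In this sketch,
    Python exceptions stand in for the memory violations a real fuzzer
    would monitor for."""
    faults = 0
    for i in range(iterations):
        try:
            target(mutate(template, seed=i))
        except Exception:
            faults += 1
    return faults

# A valid packet: 1-byte length field followed by that many body bytes.
crashes = fuzz(toy_parser, bytes([4]) + b"ABCD")
```

Smarter fuzzers bias mutations toward fields that influence control flow (lengths, counts, delimiters), which is the kind of optimization the talk's approach targets.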

View original page

26 July 16:15Threats to Privacy from Passive Internet Traffic Monitoring / Brian Levine, University of Massachusetts

With widespread acceptance of the Internet as a public medium for communication and information retrieval, there has been rising concern that the personal privacy of users can be eroded by malicious persons monitoring the network.

A technical solution to maintaining privacy is to provide anonymity. There have been a number of protocols proposed for anonymous network communication. We show there exist attacks based on passive traffic monitoring that degrade the anonymity of all existing protocols. We use this result to place an upper bound on how long existing protocols, including Crowds, Onion Routing, Mix-nets, and DC-Net, can maintain anonymity in the face of the attacks described. This provides an analytical measure by which we can compare the efficacy of all protocols. Our analytical bounds are supported by tighter results from simulations, and we made empirical measurements of our assumptions. We found that mix-based protocols offer the best tradeoff of performance and security.

In our most recent work, we have looked at attacks to detect signatures of users and webservers that persist over days or weeks. VPNs created by ssh tunnels or secure wireless connections (e.g., WEP) as implemented are not sufficient to block these signatures, even though they provide more protection than the SSL-based connections that have previously been studied for the same problem. We designed an attack and evaluated it with real Internet measurements: given a training period, we found an attacker could guess which exact web site (in the training set) was visited by a user through an encrypted link almost 40% of the time; 70% of the time the correct answer was in the attacker's top five guesses. (A random guess had less than a 1% chance of success.)
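The shape of such a website-fingerprinting attack can be sketched as a nearest-neighbour classifier over encrypted-traffic features: during training the attacker records a traffic profile per candidate site, then matches an observed trace to the closest profile. The feature choice (packet-length histograms) and all names below are illustrative assumptions, not the paper's classifier.

```python
def trace_distance(a, b):
    """L1 distance between two packet-length histograms
    (dicts mapping packet length -> count)."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)

def guess_site(observed, training):
    """Nearest-neighbour guess: which training-set site's traffic profile
    best matches the observed encrypted trace. Packet lengths and timing
    survive encryption, which is what makes this attack possible."""
    return min(training, key=lambda site: trace_distance(observed, training[site]))

# Hypothetical training profiles gathered by the attacker.
training = {"news-site": {1500: 10, 600: 2},
            "mail-site": {1500: 1, 200: 8}}
observed = {1500: 9, 600: 3}   # an encrypted session the attacker sniffed
guess = guess_site(observed, training)
```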

View original page

15 July 16:15Cybersecurity and Its Limitations / Andrew Odlyzko, University of Minnesota Digital Technology Centre

Network security is terrible, and we are constantly threatened with the prospect of imminent doom. Yet such warnings have been common for the last two decades. In spite of that, the situation has not gotten any better. On the other hand, there have not been any great disasters either. To understand this paradox, we need to consider not just the technology, but also the economics, sociology, and psychology of security. Any technology that requires care from millions of people, most very unsophisticated in technical issues, will be limited in its effectiveness by what those people are willing and able to do. The interactions of human society and human nature suggest that security will continue being applied as an afterthought. We will have to put up with the equivalent of baling wire and chewing gum, and to live on the edge of intolerable frustration. However, that is not likely to block development and deployment of information technology, because of the non-technological protection mechanisms in our society.

Slides are here.

8 June 16:15Privacy Protection in Ubiquitous Computing / Alf Zugenmaier, Microsoft Research, Cambridge

View original page

4 May 16:15Ubiquitous Utopia: Evolution, opportunities and security challenges / Chan Yeob Yeun, Toshiba Research Europe, Bristol

I will discuss the evolution of ubiquitous computing. Future ubiquitous communications systems will enable interaction between an increasingly diverse range of devices, both mobile and fixed. This will allow users to construct their own ubiquitous services using a combination of different communications technologies. Dynamic, heterogeneous and distributed networks will create new opportunities, such as the convergence of communications and highly adaptive reconfigurable terminals. They will also bring new challenges. I will discuss the particular problems involved in securing such ubiquitous environments. My goal is to establish a series of requirements for future security architectures, and future directions that might lead towards the ubiquitous utopia.

View original page

25 March 16:15Engineering a distributed hash table / Frans Kaashoek, MIT

Distributed hash tables (DHTs) are a popular approach to building large-scale distributed applications in the research community. They store data with high availability and they allow data to be looked up quickly, even when nodes are leaving and joining the system at a high rate. DHTs are also decentralized, requiring no organization to be in charge of management. Only a few operational DHTs exist, however, because most research has focused on the design of the lookup protocol used to find data in a DHT. We have found that, given enough network bandwidth, every lookup protocol can be made to work well; the real challenge in designing a distributed hash table is engineering the details. This talk summarizes our experience with engineering the Chord distributed hash table. Joint work with: Frank Dabek, Jinyang Li, Robert Morris, Emil Sit, and Jeremy Stribling.
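The key-placement rule underlying Chord can be sketched with consistent hashing: nodes and keys hash onto the same identifier circle, and each key is stored at its successor node. This sketch cheats by keeping a sorted global view of the ring (real Chord finds successors in O(log n) hops via finger tables); the class and names are illustrative.

```python
import hashlib
from bisect import bisect_right

def ring_id(key, bits=16):
    """Hash a name onto a 2**bits identifier circle (Chord itself uses
    SHA-1's full 160-bit output)."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

class ChordRing:
    """Toy Chord-style ring: each key is stored at its successor node,
    i.e. the first node clockwise from the key's identifier."""
    def __init__(self, node_names):
        self.nodes = sorted(ring_id(n) for n in node_names)

    def successor(self, key):
        """First node identifier clockwise from the key (wrapping around
        the top of the identifier circle to node 0)."""
        k = ring_id(key)
        i = bisect_right(self.nodes, k)
        return self.nodes[i % len(self.nodes)]

ring = ChordRing([f"node{i}" for i in range(8)])
owner = ring.successor("some-document-key")
```

Because placement depends only on hashes, a node join or leave moves just the keys between it and its neighbour, which is what lets DHTs tolerate high churn.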

View original page

18 March 16:15Why Internet voting is insecure: a case study / Barbara Simons, ACM

The U.S. Department of Defense had been planning to run an Internet-based voting "experiment" called SERVE (Secure Electronic Registration and Voting Experiment) for the 2004 presidential primaries and general election. In order to evaluate the security of SERVE, a group of computer scientists was asked to review the program. On Jan. 21, 2004 four members of the review panel, including the speaker, produced a report, available at www.servesecurityreport.org, that analyzed the security risks of SERVE and called for SERVE to be shut down. On Feb. 3, 2004, the Department of Defense cancelled SERVE.

In this talk I shall discuss the security problems with Internet voting in general and SERVE in particular. If time permits, I'll also discuss some vulnerabilities of other forms of voting such as paperless touch screen machines.

Speaker:

Barbara Simons is a technology policy consultant. She earned her Ph.D. from U.C. Berkeley, and was a computer science researcher at IBM Research, where she worked on compiler optimization, algorithm analysis, and scheduling theory. A former President of the Association for Computing Machinery (ACM), Simons co-chairs the ACM's US Public Policy Committee (USACM). She served on the NSF panel on Internet Voting, the President's Export Council's Subcommittee on Encryption, and the President's Council on the Year 2000 Conversion. She is on several Boards of Directors, including the U.C. Berkeley Engineering Fund and the Electronic Privacy Information Center, as well as the Advisory Board of the Oxford Internet Institute and the Public Interest Registry's .ORG Advisory Council. She has testified before both the U.S. and the California legislatures. She is a Fellow of ACM and the American Association for the Advancement of Science. She received the Alumnus of the Year Award from the Berkeley Computer Science Department, the Norbert Wiener Award from CPSR, the Outstanding Contribution Award from ACM, and the Pioneer Award from EFF.

View original page

16 March 16:15On the anonymity of anonymity systems / Andrei Serjantov, Computer Lab

The speaker will talk about anonymous communication systems and the relatively new field of analysis of their anonymity properties. He will introduce the subject, look at some of the ways of achieving anonymous communications, define the requirements and threat models, and then talk about a few of the methods used in their analysis.

View original page

9 March 16:15Location privacy / Alastair Beresford, Laboratory for Communication Engineering, University of Cambridge

Privacy of personal location information is becoming an increasingly important issue. This talk discusses some of the challenges of providing location privacy whilst at the same time permitting location-based services to function. Most methods of enabling location privacy in the literature use access control; this talk introduces the mix zone model, which takes a different approach, enabling location privacy through anonymisation. A mathematical model is developed to provide a quantitative measure of anonymity, and a method of providing direct feedback to the user is discussed.

slides

View original page

17 February 16:15The traffic analysis of anonymity systems / George Danezis, Computer Lab

In anonymous communications, as in other fields of computer security, the study of attack and defence go hand in hand. It might therefore seem strange that, until recently, the study of "traffic analysis" has not attracted a lot of attention. In this talk, recent quantitative breakthroughs are presented in understanding how traffic analysis is performed. They are used to quantify the cost of attacking generic anonymous communication systems. The focus then shifts towards high-bandwidth low-latency systems like "onion routing". We show how the features remaining in the anonymised streams of traffic can be used to trace them, and provide techniques that scale to de-anonymise whole networks.

View original page

10 February 16:15A monster emerges from the Chrysalis / Mike Bond, Computer Lab

The speaker has spent some time developing Security API attacks that trick hardware security modules (HSMs) into revealing their secrets by sending unusual sequences of commands to their published APIs. But how hard is it to physically open up the device and "walk in the front door"? This talk describes the speaker's experiences reverse-engineering the 'Luna CA3'. The Luna CA3 is a Hardware Security Module manufactured by Chrysalis-ITS, used in Certification Authorities all over the world. The talk begins with an informal recounting of how the reverse-engineering process progressed, and the various challenges arising on the way. It then explains the results: the exploitation of the internal API to defeat manufacturer lock-in, and identification of the weak spots for more serious attacks which may lead to full compromise. It concludes by looking at the lessons learned from a direct attack on an HSM.

View original page

3 February 16:15Extrusion detection / Richard Clayton, Computer Lab

End users are often unaware that their systems have been compromised and are being used to relay bulk unsolicited email (spam). However, automated processing of the email logs recorded on the "smarthost" provided by an ISP for their customers' outgoing email can be used to detect this activity. These logs do not contain any of the content of the email, or even the subject lines. However, the variability and obfuscation of sender and receiver that is used by spammers to avoid detection at the destination creates distinctive patterns at the source that permit legitimate email traffic to be distinguished from spam. Some relatively simple heuristics yield low numbers of "false positives", despite being tuned to ensure few "false negatives".
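
One such source-side pattern is sender-address variability: a legitimate customer claims a handful of sender addresses, while a compromised relay cycles through many. The sketch below illustrates the style of heuristic described; the log format and threshold are illustrative assumptions, not the ISP's actual scheme:

```python
from collections import defaultdict

# Hypothetical log records: (customer_ip, claimed_sender, recipient).
# Real smarthost logs differ; this merely illustrates the pattern-based idea.
log = [
    ("10.0.0.5", "alice@example.com", "bob@example.org"),
    ("10.0.0.5", "alice@example.com", "carol@example.net"),
    ("10.0.0.9", "x1@rand1.test", "victim1@example.org"),
    ("10.0.0.9", "x2@rand2.test", "victim2@example.org"),
    ("10.0.0.9", "x3@rand3.test", "victim3@example.org"),
]

def suspicious_customers(records, max_senders=2):
    """Flag customers whose outgoing mail claims many distinct sender
    addresses: the obfuscation spammers use to evade destination
    filters shows up as variability at the source."""
    senders = defaultdict(set)
    for ip, sender, _ in records:
        senders[ip].add(sender)
    return [ip for ip, s in senders.items() if len(s) > max_senders]

print(suspicious_customers(log))  # ['10.0.0.9']
```

Note that the heuristic needs only envelope metadata, consistent with the point that no message content or subject lines are inspected.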

2003

View original page

27 January 16:15Human factors and security – Beyond the interface / M. Angela Sasse, University College London

Many security researchers and practitioners treat usability of security as a user interface (UI) problem. It is no coincidence that the most widely known and cited paper on usability and security is Whitten & Tygar's "Why Johnny Can't Encrypt", a study of the user interface to PGP 5.0. Whilst there is no argument that many UIs to security tools are unusable, and that unusable UIs are bad for usability and security, I will argue that there are other pressing usability issues that need to be addressed. For instance:

  • Users often bypass security mechanisms because they interfere with production tasks.
  • Users often bypass security mechanisms because they require behaviour that conflicts with their values and social norms.
  • In many organisations, there is a discrepancy between security policies and security behaviour, which leads to a deteriorating security culture.
  • The complexity of current security systems creates problems – and fosters bad decisions – not just among end-users, but among other, technically able, stakeholders such as system administrators and software developers.

In conclusion, I will put forward a research agenda for usable and effective security.

Speaker:

M. Angela Sasse is the Professor of Human-Centred Technology in the Department of Computer Science at University College London. Since 1996, she has been researching usability issues of security systems in collaboration with a number of Ph.D. students, and has published research on the effectiveness and usability of authentication mechanisms, user attitudes and perceptions towards computer security, the human and financial costs of security mechanisms, and related work on user-centred approaches to trust and privacy.

View original page

2 December 16:15Faster hardware designs for modular arithmetic / Martin Kochanski

A refreshing thing about modern number-theoretic cryptography is that it shows how bad at sums computers really are. Even the most advanced primary-school techniques of long multiplication and long division cannot provide useful speeds when faced with 300-digit modular exponentiations.

This talk will cover the problems of designing hardware for large-integer arithmetic and the ways round them, and will describe a new design for a modular multiplication chip.

Long division is made of subtractions and it needs the result of each subtraction when deciding what to do next; but in silicon, binary subtraction (like addition) is an inescapably slow operation. The algorithm described here takes a ruthless approach: don't get it right slowly, get it wrong fast; and hope that the resulting errors (which double on every clock tick) will be noticeable before they are too large to correct. This balancing act leads to a design that is fast, economical in silicon, easily verifiable, and, unusually in this field, is as efficient for modular multiplication as it is for modular exponentiation.
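
The talk's estimate-and-correct design is hardware-specific, but a software analogue of the same theme (avoiding slow long division in modular arithmetic) is Montgomery multiplication, which replaces division by the modulus with shifts and a cheap reduction modulo a power of two. A minimal sketch, not the chip design described in the talk:

```python
def montgomery_mult(a, b, n, r_bits):
    """Montgomery multiplication: computes a*b*R^-1 mod n (R = 2^r_bits)
    without any long division by n, using only multiplies, additions,
    and shifts -- a software analogue of hardware tricks that dodge
    slow division. Requires n odd and a, b < n < R."""
    R = 1 << r_bits
    n_prime = -pow(n, -1, R) % R   # n * n_prime == -1 (mod R)
    t = a * b
    m = (t * n_prime) % R          # cheap: reduction mod a power of two
    u = (t + m * n) >> r_bits      # exact shift: t + m*n is divisible by R
    return u - n if u >= n else u

# Check against plain modular arithmetic, working in "Montgomery form".
n, r_bits = 101, 8
R = 1 << r_bits
a, b = 57, 91
aR, bR = (a * R) % n, (b * R) % n
assert montgomery_mult(aR, bR, n, r_bits) == (a * b * R) % n
```

As in the hardware design, the pay-off is that the same cheap inner operation serves equally well for a single modular multiplication or for a full modular exponentiation, since values can stay in Montgomery form throughout.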

Speaker:

Martin Kochanski is the inventor of Cardbox, a respected and widely used flat-file text database for DOS and Windows. He has been involved in cryptography since 1979, breaking several commercial encryption products as well as the Lu-Lee public-key cryptosystem; he has also designed and implemented FAP4, the world's first commercially available RSA encryption chip. He is the publisher of Universalis, which provides the daily Liturgy of the Hours through the Web, on palmtops, and through mobile phones.

paper

View original pageView slides/notes

25 November 16:15Latest trends in serious and organised identity fraud / Gareth Jones, Experian

Your identity is your most valuable asset. It is the key to unlocking your rights, rewards and privileges, qualifications, employment opportunities, citizenship and trust, medical history, benefits and reputation. Albeit intangible, it clearly has a high value, and therefore it is no surprise that identity fraud is one of the UK's fastest growing crimes. It leaves in its wake considerable disruption for consumers seeking to regain their identity, and significant losses to business.

In this lecture, Gareth Jones – a former Detective Sergeant with experience in managing fraud risks in banking, who currently directs the development of fraud prevention products for Experian, the UK's largest consumer credit reference agency – will cover:

  • The methodology used by the fraudsters with reference to case examples
  • The impact of the fraud in terms of value of loss and spread of victims
  • Good practice in the management of mass-multiple fraud cases of this sort
  • Gaps in the fraud detection process that could be improved upon
  • Opportunities for fraud prevention
  • Taking care of the victim

View original page

19 November 16:15Reasoning about VPN Integrity / Tim Griffin, Intel Research Lab

Virtual Private Networks (VPNs) should provide users with the isolation and security associated with private networks, but at a lower cost made possible by the use of a shared infrastructure. One type of VPN currently enjoying wide deployment is described in RFC 2547. From the customer's point of view, RFC 2547 VPNs represent an outsourcing of routing to Internet Service Providers (ISPs). From the ISP's perspective, this represents (at long last) a chance to "add value" to IP services. However, it also represents a network configuration nightmare. I'll talk about one attempt to tame the complexity of these VPNs using network invariants - maintained by bits of implementation - that can be composed to reason about the global correctness of various VPN implementations. The approach quickly reveals some rather nasty problems with RFC 2547 VPNs. I'll mention these and a few possible fixes.

* Dr Tim Griffin has recently joined the Intel Research Laboratory at Cambridge. He previously worked at AT&T research investigating network management. He also has research interests in databases and programming languages.

View original page

12 November 16:15Security and complexity / Andrew Cormack, UKERNA

The media term "hacking" covers a very wide range of activities. Networked computers are subject to many different types of attack at many different technical levels and with many different motivations. Defending against such diverse threats is likely to require similarly diverse measures. This talk will examine the current threats and the measures that can be taken to defend against them, and discuss how increases in the scale and complexity of computer systems may affect the balance between attack and defence.

View original page

11 November 16:15Implementation of the Regulation of Investigatory Powers Act 2000 (RIPA) / Simon Watkin, UK Home Office

Simon Watkin will share his unique perspective of the Government's progress towards full implementation of RIPA. He will recall the conception of RIPA, describe how the imposition of regulation on public authorities' surveillance of communications data was derailed and explain the effect of the RIPA Statutory Instruments which Parliament is being invited to approve. He will also explain what has happened to Part III of RIPA. Finally he will describe what he is doing to review how best the Government can ensure respect for individual privacy and, at the same time, protect the public from crime and terrorism.

Speaker:

Simon Watkin joined the Home Office's Covert Investigation Policy Team in September 2002 from David Blunkett's Private Office where he was a Private Secretary. He was nominated as an Internet Hero at the UK Internet Industry Awards 2003 for "doing his best to understand the industry, tech sector interest groups and experts and to subsequently inform discussions within the Home Office".

He worked on implementation of the recommendations of the Cabinet Office Performance and Innovation Unit report on Encryption and Law Enforcement, and on the development of the National Technical Assistance Centre. In 2001 he established the Home Office's Hi-Tech Crime Team assessing the impact of new technologies upon law enforcement capabilities.

View original page

5 November 16:15Elliptic curve cryptography / Nigel Smart, University of Bristol

I will discuss elliptic curve cryptography and how it is used in a traditional public key setting. I will go on to explain some of the attacks against such systems and then show how the existence of such attacks can be used to develop new identity-based encryption and signature protocols.

View original page

4 November 16:15A flexible, model-driven security framework for distributed systems / Ulrich Lang, ObjectSecurity Ltd.

The proliferation of different distributed systems platforms and security technologies complicates the integration of distributed applications. Model driven software development tries to tackle this problem by modelling the application logic undistorted by technology and using tools to map the model to the particular technology. Distributed systems security faces a similar challenge in that there are many different platforms and security technologies that need to be integrated.

This talk will present our new security framework. Its central part is the policy repository, which stores the platform-independent security policy. Once the framework is integrated, the mapping from the abstract policy to the concrete enforcement, as well as the translation of technology-specific security information into abstract security attributes, is automatic. We will illustrate our approach using our prototype implementation and an example integration with the CORBA Component Model, which are currently being implemented as part of an EU-IST research project.

Speaker:

Ulrich Lang is co-founder and research director of ObjectSecurity Ltd., a leading IT security specialist company. He received his Ph.D. from the University of Cambridge (Security Group, Computer Laboratory) in 2003. His dissertation was about conceptual aspects of security policies for middleware. Before that he completed a Master's Degree (M.Sc.) in Information Security at the University of London in 1997, after studying computer science with management at the University of Munich and at Royal Holloway College (University of London). After his M.Sc. graduation, he worked as an independent security consultant on various CORBA based banking projects. He is the author of a book on Developing Secure Distributed Systems with CORBA, various articles in journals and several publications at international conferences and workshops.

View original page

4 November 14:30Using memory errors to attack a virtual machine / Sudhakar Govindavajhala, Princeton University

We present an experimental study showing that soft memory errors can lead to serious security vulnerabilities in Java and .NET virtual machines, or in any system that relies on type-checking of untrusted programs as a protection mechanism. Our attack works by sending to the JVM for execution a Java program that is designed so that almost any memory error in its address space will allow it to take control of the JVM. All conventional Java and .NET virtual machines are vulnerable to this attack. The technique of the attack is broadly applicable against other language-based security schemes such as proof-carrying code.

We measured the attack on two commercial Java Virtual Machines: Sun's and IBM's. We show that a single-bit error in the Java program's data space can be exploited to execute arbitrary code with a probability of about 70%, and multiple-bit errors with a lower probability.

Our attack is particularly relevant against smart cards or tamper-resistant computers, where the user has physical access (to the outside of the computer) and can use various means to induce faults; we have successfully used heat. Fortunately, there are some straightforward defenses against this attack.

This presentation may include a live demonstration of our attack.

paper

View original page

7 October 16:15Hardware Security Appliances (HSA) / Simon Shiu, HP Labs, Bristol

Typically, HSMs protect cryptographic keys and algorithms and expose a low-level (cryptographic) API. Overall security is then dependent on the accessibility of the API. A simplistic way to improve this situation is to allow generic applications to run within a secure boundary. However, the complexity and interfaces of most applications mean that merely running them on secure hardware will not provide good security.

The Hardware Security Appliance (HSA) research is exploring ways to find the right model/balance of using secure hardware to achieve better system security. The HSA concept is to encapsulate simple security services that bind security functions such as decryption with authorisation and authentication. Such hardware-secured services provide a functional root of trust that can be placed within the context of a wider IT solution. Running a security service within a secure hardware device with limited functional and management APIs allows surprisingly rich policies to be tightly bound to the ways cryptographic keys are used. The HSA has an RSA identity to allow remote configuration of policies – hence creating a separation of control from local system administrators.

The talk will include examples of HSA services that highlight the main aspects of the approach and (hopefully) show how "thinking in an HSA-like way" leads to different kinds of security and trust solutions.

View original page

29 July 16:15Open APIs for embedded security / Carl A. Gunter, University of Pennsylvania

Embedded computer control is increasingly common in appliances, vehicles, communication devices, medical instruments, and many other systems. Some embedded computer systems enable users to obtain their own programs from parties other than the maker of the device. For instance, PDAs and some cell phones offer an open application programming interface that enables users to better customize devices to their needs and support an industry of independent software vendors. This kind of flexibility will be more difficult for other kinds of embedded devices where safety and security are a greater risk. This talk discusses some of the challenges and architectural options for open APIs for embedded systems. These issues are illustrated through an approach to implementing secure programmable payment cards based on Java Cards. This work is based on efforts of the OpEm Project at Penn.

View original page

30 June 16:15Rethinking computer architecture for cyber security / Ruby Lee, Electrical Engineering Dept., Princeton University

Cyber security provides assurances and safeguards for cyberspace interactions and services. These are built upon hardware and software technology for computing, communications and storage. In the past half century, design goals have focussed mainly on improving performance, cost and power in hardware, and on improving functionality, versatility and ease-of-use in software. Approaches to cyber security have focused on reactive measures, perimeter security and software implementations. In contrast, we propose a proactive approach to cyber security, where every component, hardware, software or networking, has secure or trustworthy operation as a primary design goal. We ask what computer architecture might look like, if cyber security is a primary design goal, rather than added on as an after-thought. What is a minimalist set of architectural components for a security-aware processor? We give some examples of faster ciphers with novel permutation instructions, defensive design for mitigating DDoS attacks, and virtual secure co-processing.

View original page

10 June 16:15Major incident planning in an NHS Acute Hospital / Marek Isalski, South Manchester University Hospitals NHS Trust

Planning for emergency incidents has become very topical with the focus on "Post-September Eleventh Threats". This seminar will give an overview of how an Acute Hospital's planning fits in with other emergency services in managing a major incident and will pay particular attention to how the skills developed by security researchers and analysts are applicable in the role of "Emergency Planning Officer".

Speaker:

After graduating in Computer Science from Cambridge and working as a security programmer, Marek Isalski was appointed as Data Security Manager at South Manchester University Hospitals NHS Trust. He is the lead for Data Protection, Freedom of Information and information confidentiality/security at the Trust, and his responsibilities also include business continuity planning. Together with James Bell he co-ordinates the Major Incident Planning Team currently reassessing emergency planning primarily for the Wythenshawe site, the hospital closest to Manchester Airport.

View original page

22 May 16:30Honeycomb and the current state of honeypot technology / Christian Kreibich, Computer Lab

View original page

20 May 16:15Why data protection laws don't work (and what may need to be done about that) / Douwe Korff, London Metropolitan University

Douwe Korff will explain what data protection is (and what it isn't, i.e. not data security and not privacy), what its basic principles are – and why the laws don't work. He will show that the legal rules are predicated on assumptions which do not hold, and that enforcement is haphazard and negotiable. But he will also show how something like data protection is going to be crucial if the individual is to be protected against major (public and private) institutions and interests. And he will then try and discuss with the audience how the problems can be overcome.

Speaker:

Douwe Korff is a Dutch human rights lawyer and data protection expert. Now a professor of international law at London Metropolitan University, he has worked in both (overlapping) fields for Amnesty International, the Council of Europe and the EU Commission as well as the direct marketing industry.

View original page

7 May 16:15The mother of all surveillance schemes / Simon Davies, London School of Economics

The UK government has launched two consultations on retention of communications data and access to data. The government's aim appears to be the creation of a comprehensive mandatory regime of data storage that will cover all aspects of location and communication traffic on almost the entire population. These proposals follow a string of initiatives designed to shift the privacy default in favour of law enforcement, revenue and national security. In this talk I will outline the threats and benefits of universal surveillance of communications, and place this assessment into the broader context of the declining state of privacy in Britain. Simon Davies is Director of Privacy International.

View original page

6 May 16:15Anonymity in practice / Len Sassaman, The Mixmaster Project

There have been many designs proposed for network anonymity systems, but only a few have seen noticeable adoption. This is due in part to the fact that there are some difficult problems to solve when designing an anonymity system, and often these problems are "practical" in nature, and not anticipated at the design stage. This seminar will discuss the ways in which anonymity systems are being deployed, what their uses are, and where they meet or fail to meet their intended purposes. Key design points, implementation and deployment pitfalls, abuse concerns, and various attacks on existing systems will be covered.

Speaker:

Len Sassaman is a communication security consultant specializing in Internet privacy and anonymity technologies. Len has been a strong defender of personal rights through technology. As a volunteer, he has lent his expertise to human rights organizations, victim support groups, and civil liberties organizations.

Len is an anonymous remailer operator, and is currently project manager for Mixmaster, the most advanced remailer software available. Previously, he was a software engineer for PGP Security, the provider of the world's best known personal cryptography software. A returning Black Hat speaker, Len is also a frequent contributor to online discussions of electronic privacy issues, and has contributed to the development of free software privacy utilities.

View original page

1 May 17:30Total Information Awareness / Phil Zimmermann

The human population is not doubling every 18 months, but the ability of computers to keep track of us is. The blind force of Moore's law has been accelerated by policy since 9/11. What are the feasible, and reasonable, responses to this?

Speaker:

Phil Zimmermann was the creator of PGP, the world's most popular email encryption software.

View original pageView slides/notes

29 April 16:15Bypass of locks / Marc Weber Tobias, Investigative Law Offices

The talk will provide a summary of the security problems associated with bypass of locks and safes, and a primer of the basic locking mechanisms. A description of the process of breaking three different locks that are utilized in the hotel industry worldwide will also be provided. These case examples will demonstrate vulnerabilities and lack of proper security engineering by the manufacturers.

Speaker:

Marc Weber Tobias is an Investigative Attorney and polygraph examiner in the United States. He has written five law enforcement textbooks dealing with criminal law, security, and communications. Marc Tobias was employed for several years by the Office of Attorney General, State of South Dakota, as the Chief of the Organized Crime Unit. As such, he directed felony investigations involving frauds as well as violent crimes.

Mr. Tobias is the author of the 1400 page textbook and multimedia collection Locks, Safes, and Security: An International Police Reference. He consults on lock security and his law firm handles investigations for government and private clients.

slides (Powerpoint, 25 MB)

View original page

07 April 16:15An alternative approach for verifiable secret sharing / Kamil Kulesza, Polish Academy of Sciences

The speaker will present some ongoing research in the first part of the talk. The second part is about a result on verifiable secret sharing, first presented with Zbigniew Kotulski and Josef Pieprzyk at ESORICS 2002 in Zurich. The approach works for any underlying secret sharing scheme. It is based on the concept of verification sets of participants, related to the authorized sets of participants. The participants interact (no third party is involved) in order to check the validity of their shares before they are pooled for secret recovery. Verification efficiency does not depend on the number of faulty participants.
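
The verification layer is scheme-agnostic, so for readers unfamiliar with the underlying primitive, here is a minimal sketch of Shamir's classic (k, n) threshold scheme, the kind of scheme such a verification construction can sit on top of. This is standard textbook material, not the ESORICS 2002 construction itself:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def share(secret, k, n):
    """Shamir (k, n) sharing: pick a random degree-(k-1) polynomial with
    constant term = secret, and hand out its values at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term
    (the secret) from any k valid shares."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

shares = share(42, k=3, n=5)
assert recover(shares[:3]) == 42      # any 3 of the 5 shares suffice
assert recover(shares[2:5]) == 42
```

The point of the talk's construction is that participants can check the validity of such shares among themselves before pooling them, rather than discovering a corrupted share only after recovery fails.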

View original page

24 March 16:15Understanding security dependencies / David LeBlanc, Microsoft

[David will present the talk "Writing secure code" that was originally announced for this slot on Wednesday in St John's College instead. He coauthored a book of the same title (CL library: K.6 39)]

View original page

18 March 16:15m-o-o-t – Securing the everyday computer, and protecting it against governments / Peter Fairbrother

Mandatory decryption and/or key access for law enforcement and other purposes is being considered by Governments as a viable alternative to key escrow.

m-o-o-t responds to this threat, which we at m-o-o-t consider useless against the well-informed, an invasion of privacy, and potentially self-incriminatory.

The implementation and integration of some techniques to make cyphertext unavailable to LEAs, to make keys unavailable to the user, and to hide files, will be covered in some detail.

These are included in the m-o-o-t CD, which boots and runs on most everyday computers – the internal hard drive need not be involved. Security measures against some non-cryptanalytic attacks are included, and functionality is optimised for the novice.

The talk will also mention some anonymity and deniability techniques which we are working on, the future of m-o-o-t at a time when the eventual implementation of RIPA Pt.3 is becoming uncertain, and some unanticipated uses for m-o-o-t.

View original page

12 March 16:15The PERMIS X.509 role based privilege management infrastructure / David Chadwick, University of Salford

Wednesday Seminar, LT1

This talk will describe a policy driven role based access control system developed under the EC PERMIS project. The user's roles, and the policy are stored in X.509 Attribute Certificates. The policy, written in XML, describes who is trusted to allocate roles to users, and what permissions each role has. The DTD has been published at XML.org. Access control decisions are made by an Access Control Decision Function consisting of just three Java methods and a constructor. The decision is made according to the requested mode of access, the user's trusted roles and the policy. We also have a tool, the Privilege Allocator, that makes ACs and stores them in an LDAP directory.

View original page

11 March 16:15Is information the new weapon of mass destruction? / Stephane Koch, Ecole de Guerre Economique & Internet Society Geneva

After the events of 11 September 2001, the past year has demonstrated how controlling publicly available information is of strategic advantage, both economically and politically. Governments find the ability to anticipate public opinion indispensable, as this permits them to disseminate "appropriate" elements of information on which the public will base its decisions.

Army psychological operations units ("psy-ops") represent this new era, in which wars are won primarily in public opinion. On this new theater of operations, the different information providers and actors in the world of communication are themselves tools of influence and manipulation – willingly or unwillingly. Taking into account the speed at which data is exchanged today and the reductions in information processing time, it becomes more and more difficult to find the guide marks necessary for an independent opinion.

View original page

20 February 14:30Cryptology and physical security: rights amplification in locks / Matt Blaze, AT&T Labs Research

Computer security and cryptology take much of their basic philosophy and language from the world of mechanical locks, and yet we often ignore the possibility that physical security systems might suffer from the same kinds of attacks that plague computers and networks. This talk examines mechanical locks from a computer scientist's viewpoint. We describe attacks for amplifying rights in mechanical pin tumbler locks. Given access to a single master-keyed lock and its associated change key, a procedure is given that allows discovery and creation of a working master key for the system. No special skill or equipment, beyond a small number of blank keys and a metal file, is required, and the attacker need engage in no suspicious behavior at the lock's location. We end with future directions for research in this area and the suggestion that mechanical locks are worthy objects of our attention and scrutiny.
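
The rights-amplification idea can be modelled abstractly: each pin in a master-keyed lock accepts two cuts (the change-key cut and the master cut), so an attacker who varies one position at a time needs only a handful of probe keys per pin, turning a product of possibilities into a sum. A toy model of that search; the pin depths and oracle interface are illustrative, not taken from the talk:

```python
def try_key(lock_pins, key):
    """A key opens the lock if every position matches one of that pin's
    acceptable cuts (change or master -- two shear lines per pin)."""
    return all(k in pins for k, pins in zip(key, lock_pins))

def recover_master(change_key, depths, oracle):
    """Blaze-style attack sketch: vary ONE pin position at a time,
    keeping all other positions at the change-key cuts. Each position
    needs at most `depths` probe keys, so the total work is linear in
    the number of pins rather than exponential."""
    master = []
    for i in range(len(change_key)):
        found = change_key[i]          # default: master cut equals change cut
        for d in range(depths):
            if d == change_key[i]:
                continue
            probe = change_key[:i] + (d,) + change_key[i + 1:]
            if oracle(probe):          # a blank filed to these cuts opens
                found = d
                break
        master.append(found)
    return tuple(master)

change = (2, 5, 1, 4, 3)
master = (6, 5, 3, 1, 3)               # shares some cuts with the change key
lock = [{c, m} for c, m in zip(change, master)]
assert recover_master(change, 7, lambda key: try_key(lock, key)) == master
```

With 7 depths and 5 pins, the attacker files at most 30 probe keys instead of searching 7^5 candidate masters, which is why only blanks and a metal file are needed.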

more info

View original page

19 February 16:15Quantum computation – from theory to experiments / Artur Ekert, DAMTP, University of Cambridge

Wednesday Seminar, LT1

The theory of computation, including modern cryptography, was laid down almost seventy years ago, was implemented within a decade, became commercial within another decade, and dominated the world's economy half a century later. Quantum information technology is a fundamentally new way of harnessing nature. It is too early to say how important a way this will eventually be, but we can reasonably speculate about its impact both on computation and data security. I will review the basic concepts of quantum information science and describe experimental techniques which aim to give data processing devices new functionality.

View original page

18 February 16:15The cryptographic role of the cleaning lady / Robert Morris, National Security Agency (retired)

In recent years, loss of valuable information has been due to surprisingly low tech attacks.

By the cleaning lady, I mean some person or entity that you believe could not possibly be part of your security or cryptographic system. I leave it to the reader to identify his or her own cleaning ladies in the remainder of this talk and in real life.

It is my understanding that all major countries employ cleaning ladies in this capacity.

Would the listener please think hard about 'trusted third parties' and 'woman in the middle' attacks.

View original page

18 February 14:30Fighting spam: moderately hard memory-bound computations / Mike Burrows, Microsoft Research

NetOS Seminar, LT2

View original page

04 February 16:15Administrative Scope: a foundation for role-based administrative models / Jason Crampton, University of London, Royal Holloway

The basic components of role-based access control are well understood and widely accepted. The use of RBAC principles to manage RBAC systems has been less widely studied although some advances have been made. In particular, the ARBAC97 model makes an important contribution to the understanding and modeling of administration in role-based access control. However, there are several features of the model which we believe could be improved. We introduce the concept of administrative scope in a role hierarchy and show how this can be used to control updates to the hierarchy. We then incrementally develop a model for administering the role hierarchy and compare it to the RRA97 sub-model of ARBAC97. We conclude that our model offers significant advantages over RRA97.
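Administrative scope can be computed directly from the role hierarchy: a role r falls within the scope of an administrative role a when every role senior to r is either senior or junior to a, so that changes made near r cannot have side effects elsewhere in the hierarchy. A small sketch under that definition; the toy hierarchy is illustrative, not taken from the paper:

```python
def up(h, r):
    """r together with all roles senior to it, in a hierarchy given as
    a dict mapping each role to its set of immediate seniors."""
    seen, stack = set(), [r]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(h.get(x, ()))
    return seen

def admin_scope(h, a):
    """Roles r junior to (or equal to) a whose seniors all lie within
    up(a) or down(a) -- the administrative scope of a."""
    roles = set(h) | {p for ps in h.values() for p in ps}
    down_a = {r for r in roles if a in up(h, r)}   # a and its juniors
    up_a = up(h, a)
    return {r for r in down_a if up(h, r) <= up_a | down_a}

# Toy hierarchy: department head (DH) over two project leads (PL1, PL2),
# each over an engineer role (E1, E2), with a shared junior role E.
h = {"E": {"E1", "E2"}, "E1": {"PL1"}, "E2": {"PL2"},
     "PL1": {"DH"}, "PL2": {"DH"}}

print(sorted(admin_scope(h, "PL1")))  # ['E1', 'PL1']: E escapes via PL2
print(sorted(admin_scope(h, "DH")))   # the whole hierarchy
```

The shared role E is excluded from PL1's scope because it has a senior (PL2) outside PL1's sphere of influence, which is exactly the side-effect problem the scope construct is designed to rule out.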

paper

View original page

17 January 16:00Making NSA Security Enhanced Linux easy to use and manage / Russell Coker

MAC-based security systems have not achieved much popularity because of both actual and perceived difficulties of use.

I will describe my work in adding SE Linux support to the Debian distribution including packaging policy files, and supporting live upgrades of software in a secure fashion. Given a choice between security and manageability most organizations will not choose security. Given a choice between security and ease of use most users will not choose security. I aim to make SE Linux easy enough for desktop users and manageable enough for commercial users.

Finally there are some issues regarding SE Linux management that have not been addressed adequately (IMHO). I will discuss these with the audience and I will be very interested in any suggestions for ways to approach these problems.

slides
followup

2002

View original page

9 December 11:00Privacy lost / Jonathan Smith, University of Pennsylvania

"...your eyes shall be opened, and ye shall be as gods, knowing good and evil"
— Satan, Genesis III:5

And the eyes of them both were opened, and they knew that they were naked
— Genesis III:7

The increasing interconnection of data sources has led to growing fears that the "end of privacy" (at least as we know it today) is near. This may be the most undesirable long-term outcome of the continuing information revolution.

Since data today are largely stored data, and further, are often collected in a user-controllable manner (e.g., by data entry from a keyboard), various privacy techniques and technologies can be applied. However, in the very near future, ubiquitous low-cost sensors will be introduced into our information networks, and eventually operated collectively, with interesting and perhaps unsettling consequences.

This talk will attempt to expose a subset of the issues and to stimulate thinking on the technologies and their implications. I will close with some speculation on how we, as engineers, might keep society's options open.

View original page

26 November 16:15Anonymity and e-voting without 'cryptography' / Ofer Margoninsky, Hebrew University of Jerusalem

AMPC is a new, encryption-free anonymizing network that is efficient to use and does not require the use of conventional cryptography by the users of the network. The AMPC (Anonymous Multi-Party Computation) method uses a variation of Chaum's mixes that utilizes value-splitting to hide inputs, and is secure as long as fewer than the square root of the number of servers in the network are compromised. On top of AMPC we have built a new e-voting protocol, which also does not require the users to use any conventional cryptography, thus 'freeing' them from the need to rely on the security and integrity of the workstations they use to perform the actual voting. The protocol also provides the voter with a receipt, which assures the voter that his vote was actually received by the tallier. This new e-voting protocol uses a new weak-signatures building block ('enhanced check vectors') as well as AMPC.
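The value-splitting idea can be illustrated with a simple additive split (a toy sketch, not the actual AMPC construction; the real scheme's parameters and mix structure are omitted):

```python
import secrets

MOD = 2**32  # toy modulus; the real scheme's parameters differ

def split(value, n_servers):
    """Split a value into n random additive shares mod MOD.
    Any n-1 shares reveal nothing about the value; all n sum to it."""
    shares = [secrets.randbelow(MOD) for _ in range(n_servers - 1)]
    last = (value - sum(shares)) % MOD
    return shares + [last]

def combine(shares):
    """Recover the value by summing all shares mod MOD."""
    return sum(shares) % MOD

vote = 7
shares = split(vote, 5)   # one share per server
assert combine(shares) == vote
```

Each server sees only a uniformly random share, so no coalition smaller than the full set learns anything about the hidden input.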

related papers

View original page

19 November 16:15Towards the human firewall – standards, pitfalls and suggestions / Rossouw von Solms, Port Elizabeth Technikon, South Africa

Information has grown to become the most important asset of most organizations today. To secure these assets effectively, a set of security controls is normally introduced. These controls can be physical, technical or operational in nature. Operational controls are those executed by employees or users of information, such as locking your office door or not writing down your password. The behaviour of employees and users is thus influenced by the operational controls defined. These operational controls are normally dictated through company policies and procedures, which are derived from and based on various standards and frameworks.

The major problem experienced in many organizations today is that users are not aware of, or do not adhere to, these policies and procedures. Educating users to behave according to the company's information security policies and procedures will therefore help to create an information security culture in the organization. This security culture gives rise to what can be called the human firewall. The human firewall should ensure that all users of information are fully educated as far as information security is concerned, and that their everyday behaviour, when working with company information, is in line with the prescribed policies and procedures.

This talk describes the role of policies, procedures, standards, frameworks, etc in creating an information security culture in an organization where the behaviour of the users creates a human firewall against information security threats.

View original page

12 November 16:15Model-checking cryptoprocessors (or: why I like the British Museum) / Mike Bond, Computer Laboratory

Design of security APIs is becoming as notoriously hard to get right as design of security protocols. This talk describes the first steps towards developing a formal tool to assist experts in the analysis of security APIs.

The speaker first describes the roots of this work in crypto protocol analysis, and explains the new challenges presented by API analysis. He describes a basic approach to formalising APIs, and presents a new tool which can check a formal model of an API against specific properties, for instance: checking a financial API to see whether any combination of up to 5 commands can reveal a customer's PIN.

The tool uses birthday attacks and a large helping of brute force to analyse a large subset of an API's state space. Though the tool can never hope to explore more than a subset of the API, the speaker believes that interesting attacks do lie within state spaces between 2^40 and 2^80 – an area as yet unexplored by existing tools.
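The flavour of such a search can be illustrated with a toy model. The API, its commands and its flaw below are entirely hypothetical stand-ins; the real tool works on formal models of commercial security-module APIs and a far larger state space:

```python
from itertools import product

# Toy API model: the state is the set of values the attacker has learned.
SECRET_PIN = "1234"

def cmd_encrypt_pin(state):
    # Legitimate command: returns the PIN under strong encryption.
    return state | {("enc", SECRET_PIN)}

def cmd_translate(state):
    # Flawed command: re-encodes an encrypted PIN under a weak scheme.
    if ("enc", SECRET_PIN) in state:
        return state | {("weak", SECRET_PIN)}
    return state

def cmd_decode_weak(state):
    # Flawed command: the weak encoding can be opened directly.
    if ("weak", SECRET_PIN) in state:
        return state | {SECRET_PIN}
    return state

COMMANDS = [cmd_encrypt_pin, cmd_translate, cmd_decode_weak]

def find_attack(max_len=5):
    """Exhaustively try command sequences up to max_len, returning the
    first sequence whose final state contains the bare PIN."""
    for length in range(1, max_len + 1):
        for seq in product(COMMANDS, repeat=length):
            state = frozenset()
            for cmd in seq:
                state = cmd(state)
            if SECRET_PIN in state:
                return [c.__name__ for c in seq]
    return None

print(find_attack())
# → ['cmd_encrypt_pin', 'cmd_translate', 'cmd_decode_weak']
```

Exhaustive search like this is only feasible for tiny models; the birthday and brute-force techniques the talk describes are what push the reachable region towards the 2^40 range.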

View original page

6 November 16:15Smartcard Defence Technology / Simon Moore, Computer Laboratory

The mass adoption of embedded computing devices (mobile phones, PDAs, smartcards, etc) is moving us rapidly into the ubiquitous computing age. If these devices are to be a boon rather than a bane then robustness is critical. Security will be increasingly important, not only for traditional roles like payment mechanisms and access control, but also for peer to peer transactions and new business structures.

Smartcards are an early embodiment of consumer security devices. They present a harder target for the criminal underworld than their magnetic stripe counterparts. However, for several years now it has been known that microprocessors can leak a lot of useful information through power and electromagnetic emissions. These emissions (often referred to as "side channels") are characteristic of conventional clocked digital circuit designs. Fault injection techniques have also been used to trick devices into fault modes which leak additional information.

As part of an EU funded project (G3Card) we have been collaborating with industrial and academic partners to develop technologies for the 3rd Generation of Smartcards. In Cambridge we have played both black hat and white hat roles so that we can evaluate what we have designed in much the same way that a good locksmith must also understand how to be a good lock pick. This lecture will review our design strategies, from concept to VLSI implementation. Results will be presented from formal verification of components to bench experiments on naked chips.

View original page

29 October 16:15Viruses – a nightmare waiting to happen? / Stuart Taylor, Sophos

This talk will present a brief history of viruses and how the problem has changed over the last 15 years, with a look at just how large the problem really is in the light of recent rapid technological change. It will review current viruses and look at what can be expected in the future.

View original page

18 October 16:00The electronic voting enigma: hard problems in computer science / Rebecca Mercuri, Bryn Mawr College

Although it might appear that modern technology should be able to provide secure, auditable, anonymous elections, this turns out to be a difficult problem for computer scientists. Vote collection and tabulation involves processes for system security, program provability, user authentication, and product reliability, all of which harbor inherent flaws. These matters are further compounded by sociological and legal technicalities – such as the prevention of vote-selling and protection from denial-of-service attacks. This talk will address these subjects from a computer science standpoint, focusing on those which are considered to be "hard" (the CS word for "presently unsolvable"). Although these computer systems can not achieve all desired election goals, suggestions will be made regarding design enhancements which, if implemented, could improve these devices to the point where they are almost as good as mechanical lever machines and hand-counted paper ballots.

Related:

View original page

15 October 17:00I know your PIN (PIN recovery attacks) / Jolyon Clulow, Prism

A number of efficient attacks against the typical financial API of tamper responding security modules will be presented. This allows the recovery of the PIN from an encrypted PIN block. These attacks succeed against the state of the art security modules of all major vendors, and are computationally trivial requiring between a few seconds and a couple of minutes. Some real world attack scenarios are also presented highlighting the potential for fraud.

dissertation, slides

View original page

1 October 16:15Verifiable democracy / Yvo Desmedt, Florida State University

Lecture Theatre 2, William Gates Building

The concept of digital signatures is supposed to replace handwritten ones. Verifiable Democracy is the virtual version of a handwritten legislature. The concept of Threshold Signatures seems to address this. (In threshold signatures the secret key is distributed so that only authorized subsets can combine their shares to form a signature; any non-authorized subset gains no information about the signature.) However, a problem that occurs is that, even in a virtual legislature, lawmakers may be absent. In many democratic organizations the number of members varies over time, and so does the meaning of a majority. The manner in which a legislature votes is similar to a threshold signature scheme, and the power to sign is similar to possessing shares to sign. The fact that members are absent implies the need for a transfer of the power to sign. Schemes for redistributing shares have been developed. However, these solutions require parties to delete their shares, which is often an unrealistic assumption. Here we provide a model for democratic bodies and solve the related problem of assuring an orderly and verifiable transfer of power as the size of the body varies. This presentation is based on joint work with Brian King and will be presented at eGOV (September 2–6).

View original page

17 September 16:15Laser radiation – a tool for integrated circuit examination and interference / Peter Skorobogatov, SPELS, Moscow

Lecture Theatre 2, William Gates Building

This talk presents research results on the effects of irradiating semiconductor devices (SD) and integrated circuits (IC) with lasers. We show that adequate simulation of the phenomena occurring requires the joint numerical solution of both the optical equations and the fundamental semiconductor physics equations in a two-dimensional approximation. Simulations with our "DIODE-2D" software have shown that laser irradiation can be an effective tool for investigating and influencing SDs and ICs. It may be used to ionize separate components to determine their reaction or to change their state. The numerical simulation helps to identify optimal laser-beam parameters, such as wavelength, pulse width, location, etc. Numerous examples will illustrate the capabilities of SD and IC laser irradiation.

View original page

17 September 15:00Exploiting EM emanations and using templates for sidechannel attacks / JR Rao, IBM Thomas J. Watson Research Center, NY

Lecture Theatre 2, William Gates Building

In the first part of this talk, I will present results of a systematic investigation of leakage of compromising information via electromagnetic (EM) emanations from CMOS based devices. This information leakage differs substantially from and is more powerful than leakage from other conventional side-channels such as timing and power. EM emanations are shown to consist of a multiplicity of compromising signals, each leaking somewhat different information. Our experimental results confirm that some of these signals could individually contain enough leakage to defeat countermeasures against other side-channels such as power. In the second part of this talk, I will present a new form of side channel attacks which we call template attacks. These attacks can break implementations and countermeasures whose security is dependent on the assumption that an adversary cannot obtain more than one or a limited number of side channel samples. They require that an adversary has access to an identical experimental device that he can program as he chooses. In contrast to previous approaches which viewed noise as a hindrance that had to be reduced or eliminated, our approach focuses on precisely modeling noise, and using this to fully extract information present in a single sample. I will present a case study where we use this approach to extract keys from an implementation of RC4.
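The template approach can be sketched as follows, under an assumed (hypothetical) Hamming-weight leakage model. Real template attacks build richer multivariate noise models from the profiling device; this toy keeps a single Gaussian per key hypothesis:

```python
import random
from statistics import NormalDist

random.seed(1)  # deterministic toy measurements

# Hypothetical leakage model: one side-channel sample is the Hamming
# weight of the key byte plus Gaussian noise.
def leak(key_byte, noise=0.8):
    return bin(key_byte).count("1") + random.gauss(0.0, noise)

# Profiling phase: on an identical device under our control, record
# many samples per candidate key byte and fit a Gaussian template.
def build_templates(n_profiling=200):
    templates = {}
    for k in range(256):
        samples = [leak(k) for _ in range(n_profiling)]
        mu = sum(samples) / len(samples)
        var = sum((s - mu) ** 2 for s in samples) / (len(samples) - 1)
        templates[k] = NormalDist(mu, var ** 0.5)
    return templates

# Attack phase: classify a SINGLE sample from the target device by
# maximum likelihood against the stored templates.
def classify(sample, templates):
    return max(templates, key=lambda k: templates[k].pdf(sample))

templates = build_templates()
guess = classify(leak(0xFF, noise=0.1), templates)
assert guess == 255  # 0xFF is the only byte with Hamming weight 8
```

Here the templates can only separate Hamming-weight classes (0xFF is uniquely recoverable because it is the sole weight-8 byte); the point is the two-phase structure of profiling then single-sample maximum-likelihood classification.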

View original page

3 September 16:15Physical one-way functions / Ravi Pappu, ThingMagic LLC

Lecture Theatre 2, William Gates Building

Modern cryptographic practice rests on the use of one-way functions, which are easy to evaluate but difficult to invert. Unfortunately, commonly used one-way functions are either based on unproven conjectures or have known vulnerabilities. We show that instead of relying on number theory, the mesoscopic physics of coherent transport through a disordered medium can be used to allocate and authenticate unique identifiers by physically reducing its microstructure to a fixed-length string of binary digits. These physical one-way functions (POWFs) are inexpensive to fabricate, prohibitively difficult to duplicate, admit no compact mathematical representation, and are intrinsically tamper-resistant. We provide a simple authentication protocol based on the enormous address space that is a principal characteristic of physical one-way functions.
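The authentication protocol can be sketched as a single-use challenge-response flow. The `Token` class below is a toy stand-in: it models the physical response with a keyed hash, which a real POWF of course does not admit (it has no compact description at all); only the enrolment and verification flow is the point:

```python
import hashlib
import secrets

class Token:
    """Toy stand-in for a physical one-way function. The random bytes
    play the role of the physical disorder; a clone lacks them."""
    def __init__(self):
        self._structure = secrets.token_bytes(32)

    def respond(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._structure + challenge).digest()

class Verifier:
    """Enrols a token by recording challenge-response pairs, then
    authenticates it later using a fresh, not-yet-used challenge."""
    def __init__(self, token, n_pairs=100):
        self._crps = {}
        while len(self._crps) < n_pairs:
            c = secrets.token_bytes(16)
            self._crps[c] = token.respond(c)

    def authenticate(self, claimed_token) -> bool:
        challenge, expected = self._crps.popitem()  # use each pair once
        return claimed_token.respond(challenge) == expected

genuine = Token()
verifier = Verifier(genuine)
assert verifier.authenticate(genuine)
assert not verifier.authenticate(Token())  # a clone without the structure fails
```

The enormous challenge space is what makes pre-recording all responses infeasible for an attacker, while the verifier needs only a modest table of pairs.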

A majority of this work was done while the speaker was at the MIT Media Laboratory.

View original page

18 August 15:00Verifiable secret redistribution / Chenxi Wang, Carnegie Mellon University

Lecture Theatre 2, William Gates Building

Threshold sharing schemes provide fundamental building blocks for secure distributed computation and the safeguarding of secrets. Since the invention of threshold secret sharing, many enhancements have been proposed. Proactive Secret Sharing (PSS), for example, provides enhanced protection by updating the shares periodically in a distributed fashion. Traditionally, PSS schemes retain the same set of shareholders and the same access structure across updates. A more general problem is the redistribution of shares between different (possibly disjoint) sets of shareholders and different access structures. We study this generalization and present a new protocol that performs verifiable secret redistribution between arbitrary shareholders and across arbitrary access structures. We also identify a vulnerability in previous protocols that allows faulty shareholders to distribute invalid shares to new shareholders, and we prove the security of our scheme with an information-theoretic security proof.
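The (non-verifiable) core of such a redistribution can be sketched with Shamir sharing: each old shareholder re-shares its own share to the new group, and each new shareholder interpolates the sub-shares it receives. The verification checks that give the protocol its name are omitted in this sketch:

```python
import random

P = 2**127 - 1  # Mersenne prime; toy field for the arithmetic

def share(secret, t, n):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial
    with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the given points."""
    total = 0
    for xi, yi in shares.items():
        num = den = 1
        for xj in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def redistribute(old_shares, t_old, t_new, n_new):
    """t_old old shareholders each re-share their own share to the new
    group; each new shareholder combines the sub-shares it receives by
    interpolating over the old indices (verification omitted)."""
    subshares = {x_old: share(y_old, t_new, n_new)
                 for x_old, y_old in list(old_shares.items())[:t_old]}
    return {x_new: reconstruct({x_old: subs[x_new]
                                for x_old, subs in subshares.items()})
            for x_new in range(1, n_new + 1)}

secret = 424242
old = share(secret, t=3, n=5)                       # (3, 5) structure
new = redistribute(old, t_old=3, t_new=2, n_new=4)  # move to (2, 4)
assert reconstruct({x: new[x] for x in (1, 4)}) == secret
```

Note that the secret is never reconstructed in one place during the move; the vulnerability the talk identifies arises when nothing forces the sub-shares above to be consistent, which is what the verifiability machinery adds.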

Technical Report

View original page
View slides/notes

12 June 16:15Electromagnetic eavesdropping on computers / Markus Kuhn, Computer Laboratory

The traditional techniques for remote unauthorized access to private and confidential information – tapping communication links, code breaking, impersonation – become increasingly infeasible as the use of modern cryptographic protection techniques proliferates. Those in the business of obtaining information from other people's computers without consent – criminals and spies, intelligence agency and law enforcement technicians, private detectives, market researchers – are therefore increasingly looking for alternative eavesdropping techniques. One class of alternatives utilises those unintentional information leaks caused by the physical/analog underlying processes in computers and peripherals that can be sensed, amplified and decoded at a distance.

This talk provides an introduction, overview and demonstration of electromagnetic and optical passive eavesdropping techniques for personal computers, focusing in particular on video display units. It will present new techniques for eavesdropping liquid-crystal and cathode-ray tube displays and will discuss the information-security threat posed by these, along with simple new protective measures.

Slides

View original page
View slides/notes

11 June 14:15Digital identity & profile management – the right way / Stefan Brands, Credentica

Lecture Theatre 2, William Gates Building

Applications that involve the electronic transfer of credentials, profile data, and other sensitive information are quickly gaining momentum. Initiatives such as E-Government and Network Identity are attempts to facilitate information exchanges beyond the traditional confines of private networks. Today's prevalent methods for secure electronic authentication rely either on Kerberos-style authentication or on PKI based on digital identity certificates, both of which were invented a quarter of a century ago, at the dawn of modern cryptography. In particular, they were designed to secure primarily non-open organizational environments, such as enterprise intranets and inter-government communication. Within the context of today's emerging open information infrastructures, however, symmetric authentication and digital identity certificates do at best a mediocre job of protecting security, introduce a host of performance problems, and have devastating consequences for privacy. Amongst others, they fundamentally do not offer any of the following: software-only protection against lending of access rights; role-based access; the ability to disclose the minimal information needed to a verifier; the ability of verifiers to hide competitive data from online status validators; limited-use instances of certified information; non-repudiation even in the presence of malicious central parties; and, reverse (or negative) authentication. As a result, they expose organizations to potentially unlimited liability, lead to consumer fear, and stifle the adoption of new systems. This presentation will show a much better way of doing authentication and access control in Digital Identity and Profile Management systems, based on scientific advancements in electronic authentication made over the past 25 years.

ABOUT THE AUTHOR: Dr. Stefan Brands is one of the leading cryptographic experts on the subject of electronic authentication. His book Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy has been widely acclaimed by prominent privacy advocates, security experts, and legal experts, and its subject matter is taught at universities around the world. Dr. Brands is an adjunct professor at McGill's School of Computer Science in Montreal, and is the founder of Credentica. Incorporated in January 2002, Credentica's mission is to provide superior software solutions for transaction systems that involve digital identity and profile management.

View original page

28 May 16:15Isn't Kerberos boring? / Paul Leach, Microsoft

Lecture Theatre 2, William Gates Building

Kerberos is old technology — started over 15 years ago, and based on fundamentals first published almost 25 years ago. It first showed up in Windows as part of Windows 2000, and continues to be its central authentication technology. What could be interesting about it today?

View original page
View slides/notes

21 May 16:15Emerging problems in digital evidence / Peter Sommer, CSRC/LSE

Lecture Theatre 2, William Gates Building

Computer Forensics is now over a decade old. While disk forensics operates at very high standards of evidence preservation and analysis, other forms of digital evidence do not. What standards should we expect and apply to the output of mainframe computers, or from complex systems, or to logs of intercepted network traffic? The search for answers requires us to look at the fundamentals of "forensic science" and how far its aims may be different from those of conventional scientific activity. "Proof" in the court-room is quite different from "scientific" proof; and engineering notions of "reliability" different again from "legal" reliability. We also need to understand some of the quirks of admissibility as well as the practicalities of what happens in the run up to a trial as well as in a trial itself.

Slides (ppt, pdf)

View original page
View slides/notes

14 May 16:15An advanced beginners guide to frauds and scams and some countermeasures / Jack Lang, Computer Laboratory

Lecture Theatre 2, William Gates Building

Any security system needs to consider likely threats. This seminar is a brief introduction and survey of frauds and scams, with some remarks on simple, and often non-technical, common sense countermeasures that are so often neglected.

View original page

7 May 16:15Internet voting: fool's gold? / Jason Kitcat

Lecture Theatre 2, William Gates Building

Internet Voting has been hailed as a solution to the increasing malaise we are experiencing in politics and democratic engagement, especially among 'young people'. I'll be exploring:

  • Why Internet Voting is unlikely to improve turnout.
  • Why so many companies are trying to offer Internet Voting services and what sorts of security they're offering.
  • How GNU.FREE differs from commercial Internet Voting solutions.
  • Is secure and private Internet voting possible?

Finally I'll run through some issues of security perception versus reality and why using Free Software can help non-technical people trust technology.

View original page
View slides/notes

30 April 16:15MIST: a randomised exponentiation algorithm for reducing side channel leakage / Colin Walter

Lecture Theatre 2, William Gates Building


Recent attacks using differential power analysis (DPA) have shown how good equipment and poor implementation might be applied to break a single use of RSA on a smart card. The attacks are based on recognising the re-use of operands in the standard square-and-multiply, m-ary or sliding windows exponentiation schemes. A new algorithm is presented which avoids such operand re-use and consequently provides much greater resistance to DPA. It is based on generating random addition chains. Unlike the easier process of generating addition/subtraction chains (which have been applied to ECC), the algorithm does not require the computation of an inverse, and so is also applicable to RSA.

The talk will concentrate on two aspects of the algorithm, namely its efficiency and its security against side channel leakage. The former establishes performance akin to that of 4-ary exponentiation. The latter will assume the attacker can distinguish between squares and multiplies, and perhaps recognise re-use of operands. Under such attacks, it still appears to be computationally infeasible to recover the secret exponent.
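The division-chain idea behind such a randomised scheme can be sketched as follows. This is a simplified skeleton only: the actual MIST algorithm constrains the (divisor, remainder) choices and computes the intermediate powers with carefully chosen addition chains to avoid operand re-use, all of which this sketch omits:

```python
import random

def mist_pow(base, exp, mod):
    """Compute base**exp % mod with a randomised division chain.

    Invariant: p * q**exp == base**(original exp) throughout the loop,
    so when exp reaches 0, p holds the result. Because the divisor is
    chosen at random each step, the sequence of squarings and
    multiplications differs on every run."""
    p, q = 1, base
    while exp > 0:
        d = random.choice((2, 3, 5))    # random divisor for this step
        r = exp % d
        if r:
            p = p * pow(q, r, mod) % mod
        q = pow(q, d, mod)
        exp //= d
    return p

# The result matches ordinary modular exponentiation, while the chain
# of operations is freshly randomised each time.
assert mist_pow(7, 65537, 1000003) == pow(7, 65537, 1000003)
```

An attacker who can distinguish squares from multiplies sees a different operation trace on every execution, which is the source of the DPA resistance the talk analyses.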

handout, slides

View original page

12 March 16:15Middleware security - current research and future work / Ulrich Lang, Computer Laboratory/ObjectSecurity Ltd.

Lecture Theatre 2, William Gates Building

This talk introduces a new middleware security model with access policies based on "resource descriptors". These are necessary because the available cryptographic identities only represent software entities at the middleware layer, but not individual application-layer clients or targets. As a result, additional descriptors are needed to express fine-grained policies. Useful descriptors need to fulfil properties such as uniqueness and persistence. We obtain such descriptors through a mapping process from instance information to resource descriptors.

As part of the EU funded research project Component Based Open Source Architecture for Distributed Telecom Applications (COACH), we plan to implement and evaluate component based distributed systems (CORBA components and Enterprise Java Beans) for the telecommunications domain. This includes the design and implementation of a security architecture for these new requirements and provides opportunities for interested students and researchers to join the project.

View original page

19 February 16:15The challenges of international cybercrime investigations / Nigel Jones, National High-Tech Crime Training Centre

Lecture Theatre 2, William Gates Building

The use of technology by criminals is affecting, on an unprecedented scale, the ability of the police to fulfill their role in society. Almost any crime may now have a digital aspect, from the very simple distribution of illegal material to murder.

Nigel Jones has recently retired from the Kent Police Computer Crime Unit and is currently developing training programmes for cybercrime investigators and forensic computer analysts. He has been closely involved with the topic at a national level within ACPO and at an international level within European Commission high-tech crime discussions and those in the Lyon Group of the G8.

He will talk about what constitutes cybercrime and present some real life cases to show the type of difficulties that investigators encounter, including issues such as disclosure, forensic examination of seized computers, and the practical effects of the Human Rights Act on law enforcement's ability to conduct investigations. He will also discuss the issues of data retention and preservation, along with the challenges posed to law enforcement by EU data protection legislation.

The talk aims to show how working police officers are (sometimes) managing to gather evidence, despite all the challenges they face.

View original page

12 February 16:15Location privacy in the next generation internet / Alberto Escudero-Pascual, Royal Institute of Technology, Stockholm

Lecture Theatre 2, William Gates Building

The Internet was not engineered to preserve privacy and is rapidly becoming "the" communication network. European Union policies on data protection demand a better understanding of the tradeoffs between the benefits and privacy risks of new Internet technology.

Keeping location and traffic information as confidential as the transmitted data itself is a key provision of the new European regulatory framework for electronic communications infrastructure. The EU aims to adapt and update the existing Data Protection Directive to take into account new technologies and to empower users to control their personal information. However, it is not well understood how this policy and the underlying Internet technology can be brought into alignment. For example, the current IPv6 method of automatic device configuration results in a readily observable and recognizable identifier, even for a roaming user.
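The identifier in question can be shown concretely: stateless IPv6 autoconfiguration derives a stable EUI-64 interface identifier from the NIC's fixed MAC address, so the same identifier follows a roaming user across networks (a sketch of the standard derivation; RFC 3041 was introduced precisely to replace it with periodically changing random identifiers):

```python
def mac_to_ipv6_iid(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier that stateless
    IPv6 autoconfiguration embeds in a host's address: flip the
    universal/local bit of the first MAC byte and insert ff:fe in the
    middle."""
    b = bytes.fromhex(mac.replace(":", ""))
    eui64 = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    return ":".join(eui64[i:i + 2].hex() for i in range(0, 8, 2))

# The same MAC always yields the same identifier, whatever the
# network prefix the host roams into:
print(mac_to_ipv6_iid("00:11:22:33:44:55"))  # 0211:22ff:fe33:4455
```

Since the low 64 bits of the address stay constant across every visited network, an observer can link all of a device's sessions, which is exactly the location-privacy threat the talk addresses.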

This talk will present a number of privacy threats in the next generation Internet and the ongoing efforts in the research community to handle them, focusing on RFC3041 and location privacy in (hierarchical) MobileIPv6.

View original page

29 January 16:15The psychology of identification / Graham Pike, Faculty of Social Sciences, The Open University

Lecture Theatre 2, William Gates Building

Humans have an extraordinary ability to recognise faces and can do so despite changes in viewing angle, lighting, age and hairstyle. This should make human operators very successful at detecting the fraudulent use of photo-id and -credit cards, at recognising the perpetrator of a crime and at matching the face of a suspect to video surveillance footage.

However, psychological research has shown that we tend to make very inaccurate eyewitnesses and, more surprisingly, cannot even perform the simple matching tasks involved with checking photo-cards and identifying suspects from CCTV footage. This has led to the conclusion that we are good at processing 'familiar' faces and poor at processing 'unfamiliar' faces.

The current talk looks at the results of research that has examined face identification in a forensic setting and compares the ability of human operators with the specifications set down for computerised systems.

View original page

15 January 16:15Digital signatures - experiences and solutions regarding their use / Andreas Bertsch, SIZ - German Savings Banks IT Center

Seminar Room 3 (FW26), William Gates Building

Digital signatures are a basic technology for secure e-business, but only if the following issues are addressed, so that relying parties can trust in digitally signed statements.

One problem area is the validation of digital signatures. It cannot be guaranteed that the result is independent of the time of checking. Similarly, it is not clear whether the validity of digital signatures can be checked at any future time. Moreover, the delivery risks of digitally signed messages are not distributed according to the responsibilities of sender and recipient.

For these reasons, alternative and more comprehensive solutions are necessary. One approach is to ensure that declarations of intent become binding at a point in time that is fair to both the signer and the verifier.

This talk is based on problems and experiences with digital signatures analysed in the context of the German Digital Signature Act and Ordinance. It should be interesting to discuss some of these proposals in a European context.

Book

2001

View original page

13 November 15:00Unlimited information -- opportunity or threat? / Paul Whitehouse, Chief Constable of Sussex 1993-2001

Seminar Room 3 (FW26), William Gates Building

As ever more people are connected to the Web so they have access to unlimited information. Is this a safeguard against the emergence of tyrants? Or a means by which democracies can be destroyed? How is the accuracy of information to be verified? How can the undoubted benefits of such widespread availability of information be prevented from serving as an equally effective platform for the criminally minded? Should we be overly concerned about this? How do we ensure that the information that is required gets to the right people at the right time, and is not buried in a mass of junk mail? The continually accelerating pace of change makes it imperative to set out the right principles on which to make decisions on these important questions as soon as possible.

View original page

30 October 16:15Advanced techniques for rapid localization of ic defects / Daniel L. Barton, Sandia National Labs

Seminar Room 3 (FW26), William Gates Building

In this talk we will describe the evolution of a suite of advanced failure analysis techniques used for rapid fault localization on integrated circuits. These techniques have evolved from the basic electron-beam induced current method from electron microscopy. Clever beam energy control led to the development of the resistive contrast imaging (RCI) technique. RCI proved very useful for evaluating the continuity of metal and poly interconnect layers, but it was limited in that it provided information about all conductors, both good and bad.

The need for rapid fault localization methods that return information only from defective areas led to further technique development. Modifications to the bias and amplification setup used for RCI led to the charge-induced voltage alteration (CIVA) and low-beam-energy (LECIVA) techniques. Like RCI, CIVA and LECIVA rely on an electron beam to stimulate the sample; unlike RCI, they produce images by monitoring voltage changes across a constant current supply. This modification allows these techniques to produce images with content only from the defective regions of integrated circuits.

From these electron-beam-based techniques, the optical equivalent, LIVA (light-induced voltage alteration), was developed for use with scanning laser microscopes. LIVA differs from its electron-beam counterparts only in the stimulus, i.e. the use of a scanned laser beam. LIVA relies on the generation of electron-hole pairs and requires the use of wavelengths below 1100 nm. LIVA produces images similar to CIVA and LECIVA except that the conductor fan-out network is not visible; only diffusions connected to open conductors appear in the images. The thermally induced voltage alteration (TIVA) and Seebeck effect imaging (SEI) techniques solve this problem by using longer-wavelength lasers with which electron-hole pairs are not generated. TIVA and SEI use a thermal stimulus with the same basic bias method used in the original CIVA technique. TIVA, LIVA, and SEI can be used from either the front or the back side of the die. We will describe the physics behind each technique and demonstrate their applications through examples.

View original page

23 October 16:15 Verification of SET: the purchase phase / Larry Paulson, Computer Laboratory

Seminar Room 3 (FW26), William Gates Building

Past work on protocol verification has largely focused on simple protocols from the academic world. SET is a huge protocol devised by Visa and Mastercard for Internet shopping. It aims to protect both cardholders and merchants from fraud. Protocol participants must first register with their bank, which (after making suitable checks) will provide them with electronic credentials. Customers don't give their credit card numbers directly, but instead give these credentials to the merchant to prove their honesty. The merchant presents similar credentials to the customer. For payment, the customer's account details are passed to the merchant's bank, but not to the merchant himself.

The initial registration phase could in principle be simple. Unfortunately, complex mechanisms (e.g. digital envelopes) and unnecessary encryption complicate the proofs. The talk gives a very high-level overview of the SET protocol and then shows a few details of the proofs of its registration and payment phases.

View original page

9 October 16:15Electronic commerce -- some security aspects / Peter Landrock, Aarhus University and Cryptomathic

Seminar Room 2 (FW09), William Gates Building

Electronic Commerce is about Commerce. "Electronic" is only there to speed up matters and thus increase the profit. But to some (in fact, most) security experts, the focus is on "Electronic" rather than "Commerce", which becomes merely an excuse to build "very secure" systems. As a result, most systems available today are too cumbersome (e.g. SET), and if we are not careful, we may never find an appropriate route forward. In the talk, we will exhibit a number of bad designs, including PGP, and explain how we think EC should be implemented.

View original page

12 June 16:15 Information security and economics / Ross Anderson, University of Cambridge

(Michaelmas Term 2001: starting October 2001, the security seminar series takes place in the new Computer Laboratory building in West Cambridge.)

Room TP4, Computer Laboratory

Buggy software, buggy networks and buggy people make even the most carefully designed systems and processes vulnerable. Yet many of the problems can be explained more clearly and convincingly using the language of microeconomics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping and the tragedy of the commons. Information security is about power: at the technical level it is about controlling who may use which resource and how, while at the level of business strategy it is increasingly about raising barriers to trade, segmenting markets and differentiating products. Often insecurity is welcome; for example, it may foster economic growth by making monopolies harder to defend.

View original page

5 June 16:15 A low-cost hardware birthday attack on DES / Mike Bond, Richard Clayton, University of Cambridge

Room TP4, Computer Laboratory

A brute force attack on DES has been proven to be within reach of corporations and organised crime since the EFF created the Descracker machine in 1998. In this talk we aim to show just how high up the brute force ladder a single individual, of modest means, can climb.

View original page

29 May 16:15Malice within communications technology / Richard Lines, Stork Ltd

Room TP4, Computer Laboratory

Technology deployed in the mobile telecoms industry in recent years has been designed to protect networks and customers from fraud risk. The truth is that fraud has not been defeated by technology, quite the reverse. Those measures specifically designed to thwart criminals have often been used to perpetrate fraud. The reason for this is a lack of understanding of the nature of fraud and those who commit it. Internal fraud is the greatest risk that any commercial enterprise faces and those based upon technology are the most vulnerable of all. This talk will examine some of the types of internal fraud which are commonly experienced and attempt to explain them using some brief examples from the speaker's own experience as well as suggesting ways in which some of the wrongs may be righted.

View original page

22 May 16:15Security for the mobile internet / Michael Roe, Microsoft Research

Room TP4, Computer Laboratory

In version 4 of the Internet protocol, an IP "address" was used to identify both a computer and the point at which that computer was connected to the network. This is acceptable when the computer's point of connection never changes, but might become a problem when computers are mobile. The IETF is proposing a change to the Internet Protocol which allows a host's address to change over time. The last draft of the proposal was rejected as unacceptable because it introduced too many security problems. We present a cryptographic protocol which is intended to reduce these security problems to a manageable level.

View original page

16 May 16:15 Unconditional security in cryptography: was Shannon too pessimistic? / Ivan Damgaard, University of Aarhus

Babbage Lecture Theatre, Computer Laboratory

Unconditionally secure communication means that even an infinitely powerful adversary cannot break the confidentiality nor the authenticity of the system. Classical results by Shannon dating back some 50 years seem to imply that unconditionally secure solutions are doomed to being impractical, if not impossible. However, in recent years, new research has shown that these results were based on rather pessimistic assumptions on the amount of information available to an adversary. It turns out that in many practical scenarios, these assumptions are not satisfied, e.g., when communication is noisy, in large networks where not all nodes can be hacked into, or when quantum communication is used. In all these settings, unconditionally secure solutions become possible; the talk surveys these results, with particular emphasis on quantum communication.
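
Shannon's bound is easiest to see in the one-time pad, the textbook unconditionally secure cipher: perfect secrecy requires a uniformly random key at least as long as the message, used only once. A minimal Python sketch (the function name is illustrative):

```python
import secrets

def otp(message: bytes, key: bytes) -> bytes:
    """XOR a message with a one-time key of equal length.
    The same function both encrypts and decrypts."""
    if len(key) != len(message):
        raise ValueError("one-time pad key must match message length")
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # fresh uniform key, used once
ct = otp(msg, key)
assert otp(ct, key) == msg            # decryption recovers the plaintext
```

To an adversary who sees only the ciphertext, every plaintext of the same length remains equally likely; the newer results relax exactly the pessimistic assumption that the adversary observes the channel perfectly and without storage limits.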

View original page

1 May 16:15Sequential tracing and its applications / Reihaneh Safavi-Naini, University of Wollongong

Room TP4, Computer Laboratory

In a pay-TV broadcast, an authorised user may decrypt the content and re-broadcast it. At Crypto '99, Fiat and Tassa proposed dynamic tracing schemes that can trace a group of colluders who attempt to re-broadcast the content. We show an attack on their scheme and propose a new tracing scheme, called the sequential tracing scheme, which can capture all colluders and minimises real-time computation. We also show an application of this scheme to fingerprinting digital content.

View original page

11 April 16:15Information system security casino style / Jim Litchko, Litchko and Associates

Room TP4, Computer Laboratory

How much difference is there between gaming cheats and hackers? Not much, so why should the methods of protection and detection differ? This presentation provides a practitioner's review of how cheating in casinos and attacking information systems are similar. Using past posting, cold decks, chip cups, palming, card counting and mini-cam techniques, the presenter will illustrate how hackers attack systems using Back Orifice, Trojan horses, shoulder surfing, social engineering, and lead referral methods. Finally, the presenter will explain how time-proven casino protection and detection techniques reduce the risk in casinos, and how similar techniques can be used to provide effective information systems security. Additionally, he will talk about how new knowledge-base and device agent technologies are being used to improve the central management of enterprise security devices.

View original page

6 March 16:15Embedding attacks on clock-controlled sequence generators / Bill Chambers, Kings College London

Room TP4, Computer Laboratory

I shall describe a number of attacks proposed recently on simple binary clock-controlled sequence generators, where one linear-feedback shift register determines the clocking of another shift register which produces the output. (The connection polynomials are assumed known.) In particular I shall consider the step[1..D] generator, the shrinking generator, and the closely related alternating-step generator. The basic idea is to find out where and with what frequency or probability the output binary sequence can be embedded in the sequence produced by the clock-controlled shift register. After describing methods for finding the most likely places for the embedding, I then examine ways of finding 'a posteriori' probabilities for the bits in the clocking sequence, and hence making possible fast correlation attacks on the control shift register.
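
As a concrete reference point, the shrinking generator mentioned above can be sketched in a few lines of Python (register lengths and tap positions here are toy values chosen purely for illustration): the selection LFSR S decides which output bits of the data LFSR A survive, so the keystream is A's sequence with irregular gaps, which is precisely the embedding that these attacks exploit.

```python
def lfsr(seed, taps):
    """Fibonacci LFSR: output the leading bit, feed back the XOR of taps."""
    state = list(seed)
    while True:
        yield state[0]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]

def shrinking_generator(a_seed, a_taps, s_seed, s_taps, n):
    """Emit a bit of LFSR A only when the selection LFSR S outputs 1."""
    a, s = lfsr(a_seed, a_taps), lfsr(s_seed, s_taps)
    out = []
    while len(out) < n:
        a_bit, s_bit = next(a), next(s)
        if s_bit:
            out.append(a_bit)
    return out

keystream = shrinking_generator([1, 0, 0, 1], [0, 3], [1, 1, 0, 1], [0, 2], 16)
```

An attacker who knows the connection polynomials asks: at which positions, and with what probability, can this keystream be embedded in A's raw output? Answering that yields a posteriori probabilities for the bits of S's clocking sequence.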

View original page

27 February 16:15Cryptographic protocol analysis via strand spaces / Joshua Guttman, the Mitre Corporation

Room TP4, Computer Laboratory

Strand spaces are a Dolev-Yao style model of cryptographic protocol execution. They are intended to retain the minimal information compatible with the goal of providing reliable proofs of authentication and secrecy properties where they hold, and counterexamples where they do not. Strand spaces have been used as the basis for numerous results by our group and others.

View original page

20 February 16:15Ponder: a language for specifying security and management policies for distributed systems / Morris Sloman and Emil Lupu, Imperial College, London

Room TP4, Computer Laboratory

This seminar describes Ponder - a new declarative, object-oriented language for specifying policies for security and management of distributed systems. The language includes constructs for authorisation policies defining permitted actions; event triggered obligation policies specifying actions to be performed by manager agents; refrain policies specifying actions that subjects must refrain from performing; delegation policies defining what authorisations can be delegated and to whom. Filtered actions extend authorisations to define transformation of input or output parameters. Constraints specify limitations on the applicability of policies based on time or object state. Roles group the policies relating to a position in an organisation. A management structure defines a configuration of role instances as well as the relationship between roles. These concepts can be used to model roles, rights and duties relating to organisational patterns which occur in many large enterprises.

View original page

13 February 16:15Attacks on cryptoprocessor transaction sets / Mike Bond, University of Cambridge

Room TP4, Computer Laboratory

Attacks are presented on the IBM 4758 CCA (the first ever security module to have achieved all round FIPS140-1 Level 4 certification) and the Visa Security Module. Two new attack principles are demonstrated. Related key attacks use known or chosen differences between two cryptographic keys. Data protected with one key can then be abused by manipulation using the other key. Meet in the middle attacks work by generating a large number of unknown keys of the same type, thus reducing the key space that must be searched to discover the value of one of the keys in the type. Design heuristics are presented to avoid these attacks and other common errors.
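
The key-space reduction behind the meet-in-the-middle attack is a birthday effect: if n unknown keys of the same type are resident in the module, a brute-force search expects to hit one of them after roughly 2^k/n trials instead of 2^(k-1). A toy simulation in Python (the key sizes here are illustrative, far smaller than the 4758's real ones):

```python
import random

def trials_to_hit(key_bits, n_targets, seed):
    """Plant n unknown target keys, then try candidate keys in a random
    order until any target is found; return the number of trials used."""
    rng = random.Random(seed)
    space = 1 << key_bits
    targets = {rng.randrange(space) for _ in range(n_targets)}
    order = list(range(space))
    rng.shuffle(order)
    for trials, guess in enumerate(order, start=1):
        if guess in targets:
            return trials
    return space

def average_trials(key_bits, n_targets, runs=20):
    """Average the search cost over several independent experiments."""
    return sum(trials_to_hit(key_bits, n_targets, s) for s in range(runs)) / runs
```

With a 16-bit toy space, hunting one key costs about 2^15 trials on average, while hunting any one of 256 generated keys costs roughly 2^16/256: the attacker trades cheap key-generation requests for an exponentially cheaper search.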

View original page

6 February 16:15Low temperature data remanence in static ram / Sergei Skorobogatov, University of Cambridge

Room TP4, Computer Laboratory

Security processors typically store secret key material in static RAM, from which power is removed if the device is tampered with. It is commonly believed that, at temperatures below -20C, the contents of SRAM can be `frozen'; therefore, many devices treat temperatures below this threshold as tampering events. We have done some experiments to establish the temperature dependency of data retention time in modern SRAM devices. Our experiments show that the conventional wisdom no longer holds.

View original page

30 January 16:15Membership management for ad-hoc groups / Tuomas Aura, Microsoft Research

Room TP4, Computer Laboratory

We present an architecture for creating groups, managing their membership and proving membership in ad-hoc networks. Ad-hoc networks are formed on demand without support from pre-existing infrastructure such as central servers, security associations or a PKI. The networks must continue functioning - as securely as possible - even when communication between the network nodes is only occasional and nodes unexpectedly fail or leave the network. Our architecture is based on key-oriented public-key certificates. (This is based on joint work with Silja Maki and Maarit Hietalahti, and it was funded by the Finnish defense forces.)

View original page

23 January 16:15On message integrity in symmetric encryption / Virgil Gligor, University of Maryland

Room TP4, Computer Laboratory

TBA

View original page

18 January 11:00 Architectural support for copy and tamper resistant software / Chandramohan Thekkath, Compaq SRC / Stanford

Room TP4, Computer Laboratory

Implementing copy protection on software is a difficult problem that has resisted a satisfactory solution for many years. This paper proposes a set of features that allows a machine to execute XOM code: code where neither the instructions nor the data are visible to entities outside the running process. To support XOM code we use a machine that supports internal compartments, where a process in one compartment cannot read data from another compartment. All data that leaves the machine is encrypted, since we assume secure compartments cannot be guaranteed by anything outside the machine. The design of this machine poses some interesting trade-offs between security, efficiency and flexibility. We explore some of the potential security issues as one pushes the machine to become more efficient and flexible. Our analysis indicates that, while not cheap, it is possible to create a normal multi-tasking machine where nearly all applications can be run in XOM mode.

2000

View original page

29 November 16:15Model checking security properties of cryptographic protocols / Marcelo Fiore, University of Cambridge

Babbage Lecture Theatre

I will consider the problem of automatically verifying cryptographic protocols. In particular, I will present an algorithm that, given a finite process describing a protocol in a hostile environment, computes a model in which security and authentication properties can be checked. This algorithm, I hope, will serve as the basis for a verification tool.

View original page

21 November 16:15A nested mutual authentication protocol / Dave Otway, Citrix Research

Room TP4, Computer Laboratory

This authentication protocol is a generalisation of the Otway-Rees protocol in which the common challenge is replaced by component nesting so that it can be applied to object-based, client-server chains involving any number of objects and principals. Each object in a chain, whether acting in a client or server role, handles authentication with its neighbours, without any need to be aware of the resultant global behaviour. Session keys are returned by an authentication server which services a client-server chain as a whole: nested requests are built along the forward chain; the final server presents the whole package to the authentication server; and nested responses containing session keys are delivered back down the chain.

View original page

15 November 16:15Locality, independence and linearity / Glynn Winskel, Cambridge University

Babbage Lecture Theatre, New Museums Site

Starting with a process language for cryptographic protocols and a semantics designed to support reasoning about secrecy and authentication, I'll illustrate the roles of locality, independence and linearity in understanding and reasoning about distributed processes. This will lead on to a sketch of the broader research interests of myself and students.

View original page

14 November 16:15 Living with RIP / Charles Lindsay, University of Manchester

Room TP4, Computer Laboratory

The passage of the Regulation of Investigatory Powers Act through parliament was the occasion of much controversy, especially as regards its provisions relating to cryptography. It appeared that it breached the European Convention on Human Rights at many points, and that the possibility of having their private keys seized would drive many E-commerce businesses overseas. In the event, the Act was amended to mitigate the worst excesses, with the simultaneous introduction of much window dressing. Nevertheless, many lesser problems remain, which may or may not be addressed in the Code of Practice. Since the implementation of that part of the Act has now been postponed for a year, we may have to wait some considerable time before the full picture becomes clear.

View original page

7 November 16:15 Security attributes in CORBA / Ulrich Lang, University of Cambridge

Room TP4, Computer Laboratory

This talk discusses the difficulties of describing an appropriate notion of the security attributes 'caller' and 'target' in object-oriented middleware systems such as CORBA.

View original page

31 October 16:15Practical traceability 101 / Richard Clayton, University of Cambridge

Room TP4, Computer Laboratory

The Internet and its protocols provide methods by which it is possible to locate the person or machine responsible for a particular action. In many ways, "traceability" should be seen as the opposite of "anonymity".

View original page

24 October 16:15Auctions over anonymous networks / George Danezis, University of Cambridge

Room TP4, Computer Laboratory

The most popular way to attack protocols that provide anonymity to the participants is to use the provided anonymity to cheat. It is then very difficult to trace the cheaters and special mechanisms must be present in the protocols to help with that task. We will discuss the example of anonymous auctions and the various ways participants can cheat. We will refine the proposed protocols to support "identity escrow", so that the identity of the cheaters can be revealed, by a third party, if the protocol has not been followed.

View original page / View slides/notes

17 October 16:15Two new signature schemes / Ron Rivest, MIT

Room TP4, Computer Laboratory

We describe two new signature schemes with interesting algebraic properties.

View original page

10 October 16:15Do we have enough accidents? / John Adams, University College London

Room TP4, Computer Laboratory

Risk management is often done badly. Directly perceptible risks are dealt with instinctively and intuitively, but when the science is inconclusive people are liberated to argue from pre-established beliefs, convictions and prejudices. When unconfirmed hypotheses - `virtual risks' - get mistaken for risks about which science has clear and useful advice to offer, much confusion results.

View original page

3 October 16:15 The XenoService - a distributed defeat for distributed denial of service / Jianxin Yan, Stephen Early, University of Cambridge

Room TP4, Computer Laboratory

Distributed Denial of Service attacks have become a serious problem since the second half of 1999. They are a manifestation of what economists call the `tragedy of the commons': while everyone may have an interest in protecting a shared resource (Internet security), individuals have a stronger motive to cheat (connecting insecure computers). So we doubt that some of the proposed technical countermeasures will work, as they take insufficient account of economic forces. In this talk, we discuss the XenoService, a possible remedy.

View original page

27 June 16:15Telecomms fraud - no 'them' and 'us' any more / Richard Cox, Mandarin Technology

Room TP4, Computer Laboratory

Once upon a time there was the GPO, who were required by law to run all the Nation's communications - be they written, telegraph or voice. Fraud was easy to perpetrate in those days because of the somewhat crude methods used to control the switched network. Nowadays BT, who inherited the role the GPO held as the provider of Universal Service for telephony and telex, are but one of over 200 licensed network operators: and all of these are to some extent at risk of becoming victims of fraud.

View original page

23 June 13:00Information warfare in the 21st century / Whitfield Diffie

Room TP4, Computer Laboratory

The early years of the 21st century will be dominated by explosive expansion of communications. The bandwidth, flexibility (particularly mobility), and range of services available will support an electronic commerce to which the current hype cannot do justice. Society's resulting dependence on this resource will make it the target of first resort in future conflicts, continuing the 20th century trend toward involvement of civilian populations.

View original page

20 June 16:15Revisiting protocol modelling / Susan Pancho, Computer Laboratory

Room TP4, Computer Laboratory

Most of the existing work on security protocol analysis concentrates on finding guarantees of correctness. In some cases, analysis using one tool may find a "new" flaw that was not detected by another tool. Such results are sometimes attributed to the use of more rigorous tools.

View original page

13 June 16:15Mimesis - operating system support for confined execution environments / Stephen Early, Computer Laboratory

Room TP4, Computer Laboratory

Any program can create an environment in which to run another program, controlling every aspect of its operation. Trivially, but inefficiently, this can be done by binary emulation. More usefully, most current processors provide sufficient support for confined programs to be executed natively.

View original page

6 June 16:15 Electronic commerce: who carries the risk of fraud? / Ian Brown, University College, London

Room TP4, Computer Laboratory

`Non-repudiation' is a favourite buzzword in e-commerce discussions, and a major part of much new digital signature legislation. But its use outside its original security context is riven with problems. This talk looks at the technical, usability, and legal difficulties associated with non-repudiation in the real world, and their effect on the allocation of risk in e-commerce. Banks have successfully moved the risk of online credit card transactions to merchants. Can they shift banking risk to consumers so easily?

View original page

2 June 16:00 Security in an international electronic payment system / Marijke De Soete, Europay International

Room TP4, Computer Laboratory

Europay is an international payment scheme with over 220 million cards licensing the brands Maestro, Cirrus, Eurocard and Eurocheque. It is currently migrating its magstripe-card based system to chipcard technology. The talk will highlight the security architecture of the new debit-credit system which is based on the so-called EMV (Europay-Mastercard-Visa) specifications. Furthermore the PKI will be presented which supports the offline chipcard authentication method.

View original page / View slides/notes

30 May 16:15How the credit card system *really* works / Alan Solomon

Room TP4, Computer Laboratory

Credit cards are the currency of the internet. But they aren't greasing the axles of commerce, because they weren't designed for customer-not-present transactions.

View original page

9 May 16:15Hardware security modules in electronic commerce / Nicko van Someren, nCipher

Room TP4, Computer Laboratory

In this talk we will look at the cryptographic requirements for electronic commerce and how hardware security modules (HSMs) can help address these needs. We will examine the threat models and security policies commonplace in e-commerce and we will look at how various types of HSMs can help. We will then look at how existing HSMs could be improved to provide more secure solutions in the future.

14 March The clash between users' and security departments' perceptions / Anne Adams, Middlesex University

View original page

7 March 17:00 Codebreaking in the Cold War / Christopher Andrew, University of Cambridge

Hopkinson Lecture Theatre, Computer Laboratory

No history of the Second World War nowadays fails to mention the important role of signals intelligence (SIGINT). By contrast, SIGINT is entirely absent from most studies of the Cold War. Newly declassified material in the West, as well as highly classified material exfiltrated from KGB archives by Vasili Mitrokhin, shows, however, that SIGINT continued to play a major role. The KGB supplied the Soviet leadership throughout the Cold War with far more high-grade diplomatic SIGINT (including decrypts from major NATO governments) than they could possibly read. In many cases agent penetration was able to resolve the problems caused by the increasing complexity of cipher systems. Among the revelations in recently declassified Western SIGINT is the identification of a Cambridge scientist as the youngest major spy of the twentieth century.

View original page

29 February 16:15 SENSS Bruce - developing a tool for secure bulk systems integrity-checking / Alec Muffett, Sun Professional Services

Room TP4, Computer Laboratory

`SENSS Bruce' is a new security tool, being made available for free by Sun Microsystems, under the terms of the Sun Community Source License. Bruce provides a high-integrity, highly-trustworthy, hierarchical and scalable framework for pro-active security/integrity checking on a network-wide basis. This presentation will describe Bruce's design and functionality, and cover the benefits and weaknesses of Java, which was used as the platform for implementing Bruce.

View original page

15 February 16:15Distributed authorisation for enterprises / Vijay Varadharajan, Microsoft Research

Room TP4, Computer Laboratory

As organisations migrate to a distributed computing environment, the administration of security policies, in particular authorisation policies, becomes increasingly important. In this talk, we will consider some issues involved in the design of an authorisation system for distributed systems. We will discuss some of the architectural principles involved and consider an authorisation policy language and give some examples of policy specifications. We will conclude the talk by looking at some further work in this area.

View original page

8 February 16:15The shadow of your soul / Alastair Kelman, LSE

Room TP4, Computer Laboratory

The term `data shadow' covers the concept that combining different types of records (toll records, credit records, bank records, health records etc) can elicit additional information, a data shadow, which can track the life of an individual. Now in 2030 our society is managed in every aspect by shadow watching - said to be `the most significant tool for the maintenance of law and order by the European army and for the selling of Government services' (Prime Minister Sir Chris Evans, Guildhall speech, January 2029).

View original page

8 February 14:00Secure and selective dissemination of xml documents / Elisa Bertino, Universita' degli Studi di Milano

Microsoft Research, Cambridge

XML (eXtensible Markup Language) has emerged as a relevant standard for document representation and exchange on the Web. It is often the case that XML documents contain information of different sensitivity degrees, which must be selectively shared by (possibly large) user communities. There is thus the need for models and mechanisms enabling the specification and enforcement of access control policies for XML documents. Mechanisms are also required enabling a secure and selective dissemination of documents to users, according to the authorizations that these users have. In this talk, we first define a model of access control policies for XML documents. Policies that can be defined in our model take into account both user profiles, and document contents and structures. We also describe an approach, which essentially allows one to send the same document to all users, and yet to enforce the stated access control policies. Our approach consists of encrypting different portions of the same document according to different encryption keys, and selectively distributing these keys to the various users according to the access control policies. We show that the number of encryption keys that have to be generated under our approach is minimal.
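
The key-minimisation idea can be illustrated with a simple strategy (a sketch under the assumption that two portions with identical authorised-user sets may share a key; all names are hypothetical): use one key per distinct authorised set, and give each user only the keys for the sets that contain them.

```python
from collections import defaultdict

def assign_keys(acl):
    """acl maps each document portion to the frozenset of users allowed
    to read it.  Portions with identical authorised sets share one key,
    so the number of keys equals the number of distinct sets."""
    key_for_set = {}
    for users in acl.values():
        key_for_set.setdefault(users, f"k{len(key_for_set)}")
    portion_key = {p: key_for_set[u] for p, u in acl.items()}
    user_keys = defaultdict(set)
    for users, key in key_for_set.items():
        for u in users:
            user_keys[u].add(key)
    return portion_key, dict(user_keys)

acl = {
    "header": frozenset({"alice", "bob", "carol"}),
    "salary": frozenset({"alice"}),
    "review": frozenset({"alice"}),   # same policy as "salary"
}
portion_key, user_keys = assign_keys(acl)
```

The same encrypted document then goes to everyone; `salary` and `review` share a key held only by alice, while `header` is readable by all three users, and only two keys are generated in total.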

View original page

1 February 16:15The interaction between fault tolerance and security / Geraint Price, CCSR University of Cambridge

Room TP4, Computer Laboratory

Most existing work which merges Fault Tolerance into Security concentrates on using fault tolerance as a means of bolstering a server's resilience to external attack. The most notable of this work is carried out by Reiter on Rampart.

1999

View original page

7 December 16:15Authentication primitives and their compilation / Cedric Fournet, Microsoft Research

Room TP4, Computer Laboratory

Adopting a programming-language perspective, we study the problem of implementing authentication in a distributed system. We define a process calculus with constructs for authentication and show how this calculus can be translated to a lower-level language using marshalling, multiplexing, and cryptographic protocols. Authentication serves for identity-based security in the source language and enables simplifications in the translation. We reason about correctness relying on the concepts of observational equivalence. (This is joint work with Martin Abadi and Georges Gonthier)

30 November What are principals? / Dieter Gollmann, Microsoft Research

View original page

23 November 16:15Secure reachability management in mobile communications / Kai Rannenberg, Microsoft Research

Room TP4, Computer Laboratory

The increased technical availability provided by mobile communication necessitates support for users so that they can control their personal reachability (personal reachability management). This talk reports on a PDA and mobile phone based prototype functioning primarily as a reachability manager to avoid annoying calls and overcome the CallerID problem. Its core functionality is to enable parties to negotiate, e.g. the urgency of a telephone call, and by that maintain security that respects the interests of all involved parties (multilateral security).

View original page

2 November 16:15 The factorisation of RSA-155 / Paul Leyland, Microsoft Research

Room TP4, Computer Laboratory

The RSA cryptosystem is very widely used. A particularly visible application is to protect and authenticate e-commerce transactions and it has been estimated that about 95% of all web-based e-commerce uses 512-bit RSA keys. As the security of RSA is no better than the difficulty of factoring a key's public modulus, progress in integer factorisation directly measures the security of RSA keys of any particular size.
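
One direction of that equivalence is immediate: anyone who factors the public modulus can reconstruct the private exponent. A textbook-sized illustration in Python (toy parameters, hundreds of digits short of a real key):

```python
def private_exponent(e, p, q):
    """Given the public exponent e and the factorisation n = p*q,
    recover d as the inverse of e modulo (p-1)(q-1)."""
    return pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

p, q, e = 61, 53, 17          # toy primes, far too small to be secure
n = p * q                     # 3233: the public modulus
d = private_exponent(e, p, q)
m = 42
c = pow(m, e, n)              # encrypt with the public key
assert pow(c, d, n) == m      # the recovered d decrypts correctly
```

For a 512-bit modulus the only hard step is the factorisation itself, which is why progress in integer factoring directly measures the security of RSA keys of a given size.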

View original page

19 October 16:15Verifying security protocols based on smart cards / Giampaolo Bella, Cambridge University

Room TP4, Computer Laboratory

Smart cards can be formalised realistically within Paulson's inductive approach for security protocols. The cards can be stolen and/or cracked by an eavesdropper. The kernel of their built-in algorithm works correctly, so they can't be used as oracles, but their I/O interface doesn't, so they send correct outputs unreliably.

View original page

12 October 15:00Elliptic curves in cryptography / Nigel Smart, Hewlett-Packard Laboratories

Microsoft Research Ltd, St George House, 1 Guildhall Street, Cambridge.
Doors will be open between 2.45pm and 3.15pm

In the past few years elliptic curve cryptography has moved from a fringe activity to a major challenger to the dominant RSA/DSA systems. Elliptic curves offer major advances on older systems such as increased speed, less memory and smaller key sizes. As digital signatures become more and more important in the commercial world the use of elliptic curve-based signatures will become all pervasive.

View original page

20 July 16:15Model checking to verify computer security policies / Robert Watson, TIS/Carnegie Mellon University

Room TP4, Computer Laboratory

Model checking is a method of formally verifying properties of finite state machines. By describing operating system structure and system authorization policies using finite state machines, model checking may be used to verify useful properties of policies, improving the chances of developing a secure system. The technique is demonstrated on authorization systems from an Active Network, and from a simplified UNIX-like environment.
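As a rough illustration of the approach (a hypothetical toy policy, not the Active Network or UNIX-like systems of the talk), an authorization policy can be encoded as a finite state machine and a security property checked by exhausting the reachable states:

```python
# Hypothetical toy authorization policy: states are privilege levels,
# transitions map (state, action) -> next_state.
transitions = {
    ("logged_out", "login"): "user",
    ("user", "sudo"): "admin",
    ("user", "logout"): "logged_out",
    ("admin", "logout"): "logged_out",
}

def reachable(start, transitions):
    """Exhaustively explore every state reachable from `start`."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for (src, _action), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# Property to verify: without the "login" action, "admin" is unreachable.
no_login = {k: v for k, v in transitions.items() if k[1] != "login"}
assert "admin" in reachable("logged_out", transitions)
assert "admin" not in reachable("logged_out", no_login)
```

Real model checkers add temporal logic and clever state-space compression, but the underlying idea is this exhaustive exploration.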

View original page

15 June 16:15Using nt to handle classified information / Simon Wiseman, DERA, Malvern

Room TP4, Computer Laboratory

Modern interconnected computer systems handling classified information can be built using Windows NT. The architecture provides each user with a private desktop in which to work, along with services for sharing data. Within a desktop, the user is helped to attach security labels to their data. When data is shared, labelling prevents accidental compromise, but other measures defend against other forms of compromise.

View original page

8 June 16:15Algebraic properties of encryption and the verification of authentication protocols / Katherine Easthaughffe, University of Cambridge

Room TP4, Computer Laboratory

Most approaches to formal verification of authentication protocols assume encryption to have the property that parts of a message cannot be extracted without knowledge of the encrypting key. In practice, implementations are not perfect in this sense and the correctness of a protocol may depend on the algebraic properties of encryption.

View original page

25 May 16:15The cocaine auction protocol / Francesco Stajano, University of Cambridge

Room TP4, Computer Laboratory

Traditionally, cryptographic protocols are described in terms of a sequence of steps, each of which sees one principal sending a message to another principal. It is implicitly assumed that the fundamental communication primitive is necessarily one-to-one and protocols addressing anonymity tend to resort to a highly redundant composition of multiple elementary transmissions in order to frustrate traffic analysis. This talk, building on the case study of an anonymous auction between mistrustful principals with no trusted arbitrator, presents "anonymous broadcast" as a new protocol building block. This lower-level primitive is, in its class of cases, a more accurate model of what actually happens in local area networking and, with certain restrictions, can be used as a particularly efficient implementation technique for many anonymity-related protocols.

View original page

18 May 16:15On integrity-aware symmetric encryption schemes / Virgil Gligor, University of Maryland

Room TP4, Computer Laboratory

A large variety of encryption schemes, or modes, have been proposed to date, and some of these are known to be secure against adaptive, chosen-plaintext attacks. In this presentation, I define a joint condition on any such secure scheme and any high-performance Manipulation Detection Code (hpMDC) function, such as XOR, CRC-32, modular addition, or simply a constant, to counter adaptive chosen-message attacks, namely both adaptive chosen-plaintext and chosen-ciphertext attacks, that lead to message forgeries. I also illustrate two applications of the joint condition in practice, namely (1) the design of fast encryption-with-integrity schemes and (2) the optimal selection of a hpMDC function for a given encryption scheme.
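A minimal sketch of the encrypt-with-MDC pattern under discussion, using a stand-in keystream cipher and CRC-32 as the hpMDC (the names and the cipher here are illustrative, not the schemes analysed in the talk):

```python
import struct
import zlib

def keystream_xor(key, data):
    # Stand-in "encryption" for illustration only; NOT a secure cipher.
    ks = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

def seal(key, plaintext):
    # Append a CRC-32 MDC to the plaintext, then encrypt the result.
    tag = struct.pack(">I", zlib.crc32(plaintext))
    return keystream_xor(key, plaintext + tag)

def unseal(key, ciphertext):
    tagged = keystream_xor(key, ciphertext)
    body, tag = tagged[:-4], tagged[-4:]
    if struct.pack(">I", zlib.crc32(body)) != tag:
        raise ValueError("integrity check failed")
    return body
```

The joint condition of the talk is precisely about when such a pairing resists chosen-ciphertext forgery; with a linear MDC like CRC-32 under a stream cipher, as sketched here, an attacker can flip matching bits in body and tag, so this particular pairing fails.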

View original page

11 May 16:15Multi-grade cryptography for integer factorisation based cryptosystems / Wenbo Mao, Hewlett-Packard Laboratories

Room TP4, Computer Laboratory

Rivest suggested the idea of multi-grade cryptography, which lets a cryptosystem present multiple levels of security under different circumstances. For instance, to an external law enforcement agent, the cryptosystems of the users in an organisation might show a high level of security (e.g., equivalent to a 64-bit key search). Once this high-level security ``shell'' is broken with non-trivial effort, each user's key becomes an easier computational problem (e.g., a 40-bit key search). To any other party who cannot afford to break the shell, user security remains an intractable problem. An important point in multi-grade cryptography is that the external law enforcement agent should only need to break an organisation's shell once.

View original page

5 May On the security analysis of symmetric encryption schemes / Virgil Gligor, University of Maryland

4 May Penetration analysis methods and tools / Virgil Gligor, University of Maryland

View original pageView slides/notes

23 February 16:15Delegation of responsibility / Bruno Crispo, University of Cambridge

Room TP4, Computer Laboratory

Let us consider the case of the company president who delegates the power to sign certain documents to her secretary. If the president never cheats, then many existing mechanisms are sufficient to implement this. But what if the president suddenly announces that her secretary has been sacked because of a mistake in a very important document? It may well be that the secretary did not make a mistake: but with almost all the existing mechanisms, she has no way of demonstrating that it was the president, and not she, who created or authorised the disputed document.

View original pageView slides/notes

22 February 11:30The ibm 4758 secure cryptographic coprocessor hardware architecture and physical security / Steve Weingart, IBM

Room TP4, Computer Laboratory

IBM has been working in the field of Secure Cryptographic Coprocessors since the early 1980's. This talk will briefly discuss the history of IBM's efforts, then go on to discuss the hardware architecture and the physical security design.

The hardware architecture will be shown from a performance standpoint, discussing the ideas that worked and the ones that didn't.

The physical security design was the first ever to be validated at FIPS 140-1 level 4. The principles of the design will be described and the manufacturing implications will be discussed.

Presentation material

View original pageView slides/notes

22 February 10:00Computer subsystems: a survey of attacks and defenses / Steve Weingart, IBM

Room TP4, Computer Laboratory

As the value of data on computing systems increases and operating systems become more secure, physical attacks on computing systems to steal or modify assets become more likely. This technology requires constant review and improvement, just as other competitive technologies need review to stay at the leading edge.

This talk describes known physical attacks ranging from simple attacks which require little skill or resource, to complex attacks which require trained, technical people and considerable resources. Physical security methods to deter or prevent these attacks are presented. The intent is to match protection methods with the attack methods in terms of complexity and cost. In this way cost effective protection can be produced across a wide range of systems and needs.

Specific technical mechanisms now in use will be discussed, as well as mechanisms proposed for future use. Common design problems and solutions are discussed with consideration for manufacturing.

Presentation material

View original pageView slides/notes

16 February 16:15Access control in an open distributed environment / Richard Hayton, Citrix

Room TP4, Computer Laboratory

This talk is an overview of the Oasis access control architecture. This provides both a means for specifying complex authorisation information in an open distributed environment, and an efficient implementation.

View original page

9 February 16:15Matching digital watermarking methods to real data / David Hilton, Signum Technology

Room TP4, Computer Laboratory

Recent years have seen a great proliferation of papers on watermarking of digital data. These have usually started from a very generalised view of the nature of the data and concentrated on the quality of the security algorithm.

View original page

3 February 16:15The power of quantum computing / Professor Richard Jozsa, University of Plymouth, School of Mathematics and Statistics

Babbage Lecture Theatre

The recent synthesis of quantum physics with computer science has led to a new paradigm for computation which is in principle physically realisable, yet not fully encompassed by the standard (e.g. Turing) notion of computability. A quantum computer cannot compute any non-Turing-computable function but it appears to be able to perform some computations exponentially faster than any classical device. The pre-eminent example is the existence of a polynomial-time quantum algorithm for integer factorisation - a problem for which there is no known classical (even randomised) efficient algorithm. In recent developments, quantum physics also gives rise to new modes of communication and an associated quantum information theory.

In this talk I will introduce the essential principles of quantum computation and outline the structure of some fundamental quantum algorithms. I will discuss the relation of quantum computation to various classical complexity classes and finally consider some recent issues of current interest.

This talk will be held in the Babbage Lecture Theatre. Maps and travelling directions are at http://www.cl.cam.ac.uk/site-maps/site-maps.html.

View original page

2 February 16:15Authentication - again! / Dieter Gollmann, Microsoft Research

Room TP4, Computer Laboratory

It is a popular conjecture that the design of authentication is an error prone and hence difficult task. Once again, I will try to explain how this situation may have come about.

As a general observation, one may note that in many areas of science progress in the understanding of fundamental concepts has gone hand in hand with the development of a language for discussing these concepts. The difficulty of giving good definitions for authentication bears witness to this problem. In a specific observation on authentication, I will illustrate that the term authentication is used in a number of different security paradigms, a fact that can only add further confusion.

Not surprisingly, I will argue that more precision in the discourse about authentication is required. In this respect, designers and attackers have been equally culpable so far.

View original pageView slides/notes

26 January 16:15Experience in aes algorithm implementation / Brian Gladman (formerly MoD and NATO)

Room TP4, Computer Laboratory

In its Advanced Encryption Standard (AES) programme the US National Institute of Standards and Technology has selected 15 algorithms for consideration as candidates to replace the now obsolescent DES standard.

This talk will look at some of the issues that the author has faced in implementing all 15 candidates from scratch. The coverage will focus on implementation and performance rather than on security or cryptanalysis. In particular the issues involved in using algorithm specifications as a basis for implementation in C will be discussed, as will some of the surprises involved in running such code on modern pipelined/semi-parallel architectures such as the Pentium II. The talk will also cover an interesting aspect of performance optimisation for Serpent.

Presentation material

View original page

12 January 16:15Us crypto policy: explaining the inexplicable / Susan Landau, Sun Microsystems Inc.

Room TP4, Computer Laboratory

The richest, strongest, most electronically-vulnerable nation on earth persists in a policy that effectively restricts the use of encryption technology domestically as well as abroad. Even while the security of transactions over telephone and computer networks has become a source of wide public concern, the US government continues to work against the proliferation of unbreakable cryptography (and thus perfectly concealable communications).

In this talk we present a brief history of wiretap law and privacy rulings in the United States, and we put current crypto policy in the context of decisions made over the last twenty years.

1998

View original page

8 December 16:15Realising security policy within the healthcare environment / Steve Furnell, University of Plymouth

Room TP4, Computer Laboratory

Information systems security represents a significant issue within the modern healthcare environment. Information technology now pervades virtually all aspects of operation and care provision, with a consequent need arising to preserve the confidentiality, integrity and availability of systems and data. The security policy is an essential element in ensuring that a consistent approach can be enforced and maintained across the establishment. I will discuss the areas that should be encompassed by any policy, as well as the typical constraints of the healthcare environment that may limit the practical approach. A further important consideration is how to ensure that all staff will know and observe the policy. I will address this through a discussion of security training and awareness initiatives.

The presentation will make significant reference to work that has been conducted at the European level, in particular the ISHTAR (Implementing Secure Healthcare Telematics Applications in Europe) project in which I have been involved under the EU `Telematics Applications for Health' programme.

View original page

1 December 16:15Secure sessions from weak secrets / Bruce Christianson, University of Hertfordshire

Room TP4, Computer Laboratory

Sometimes two parties who share a weak secret k such as a password wish to share a strong secret s such as a session key without revealing information about k to an active attacker. This talk describes some recent work in this direction, carried out jointly with Michael Roe and David Wheeler. We present some new protocols for secure strong secret sharing, including one based on RSA rather than Diffie-Hellman. As well as being simpler and quicker than their predecessors, our protocols also have slightly stronger security properties. In particular, they make no cryptographic use of s and so impose no subtle restrictions upon the use which is made of s by other protocols, and they do not rely upon the existence of hash functions with mystical properties. After rounding up the usual suspects, the talk will also consider some new attacks and how to frustrate them.
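For orientation, the Diffie-Hellman exchange on which most such protocols build can be sketched as follows (toy parameters; the talk's contribution, binding the exchange to the weak secret k without giving an active attacker material for offline guessing, is deliberately omitted here):

```python
import secrets

# Toy group: p is the Mersenne prime 2^127 - 1; real deployments use
# far larger, carefully chosen groups.
p = 2 ** 127 - 1
g = 3

a = secrets.randbelow(p - 2) + 1   # one party's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # the other party's ephemeral secret

A = pow(g, a, p)   # exchanged over the open network
B = pow(g, b, p)

s_one = pow(B, a, p)   # both sides derive the same strong secret s
s_two = pow(A, b, p)
assert s_one == s_two
```

An unauthenticated exchange like this is vulnerable to a man in the middle; the protocols of the talk use the shared weak secret k to rule that attack out while keeping s cryptographically independent of k.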

View original page

24 November 16:15Observations on the advanced encryption standard candidates / Mike Roe, Centre for Communications Systems Research

Room TP4, Computer Laboratory

The US government is running a competition to find a replacement for the data encryption standard. There are fifteen candidate algorithms now available for public analysis and comment. I have implemented a number of them from the published definitions, and in this talk I will discuss the lessons I learned in the process.

View original page

17 November 16:15Cryp, cip and cots: trusting cryptography in commercial-off-the-shelf systems / Bill Caelli, Queensland University of Technology

Room TP4, Computer Laboratory

Cryptographic (CRYP) sub-systems now play a vital role in the protection of "mission-critical" information systems and data networks, particularly those now being deployed for electronic commerce activities nationally and internationally. Such mission-critical information systems, and associated data networks, are, in turn, being used to control and monitor critical infrastructures in modern society; infrastructures that need a high degree of protection (CIP). These include overall structures for water reticulation, electricity, finance, government, energy, transport and so on. However, under cost pressures those in charge of such infrastructures are moving to adoption of commercial-off-the-shelf (COTS) systems for the control and monitoring of such infrastructures, rather than "bespoke" solutions to information systems needs. With cryptography forming the main protection and trust mechanism to safeguard these controlling information systems, the trustworthy integration of cryptographic sub-systems into COTS becomes of paramount importance. This has a number of technical, business and political implications that need to be explored. This talk examines all three of these aspects of the cryptography integration problem.

View original page

16 November 16:15A hacker looks at cryptography / Bruce Schneier, Counterpane Systems

Room TP4, Computer Laboratory

Building a secure product is a lot more than reading a copy of Applied Cryptography, and then stringing a series of secure algorithms and protocols together. Many "buzzword compatible" products are insecure not because of faulty mathematics, but faulty implementation. Engineers misuse secure primitives, introduce security flaws elsewhere in the process, build bad user interfaces, don't allow for errors or failures, and generally fail to leverage the security of their cryptography. This talk is about what commonly goes wrong in cryptographic products.

View original pageView slides/notes

10 November 16:15Copyright control for digital image libraries / Glenn Hall, Hewlett-Packard Laboratories

Room TP4, Computer Laboratory

We will talk about copyright control for digital image libraries using high quality imaging systems, over the web. We have built a system, using on-the-fly watermarking, for a commercial image supplier, now on trial. This raises a number of interesting technical and business questions, such as watermark distribution, and cascading permissions through business processes.

View original page

3 November 16:15Alpha pulse technology - a new concept for generating true randomness / Mark Shilton, Amersham Pharmacia Biotech

Room TP4, Computer Laboratory

The Alpha Pulse random generator is a miniature hardware device for triggering random events with a predetermined event probability. The device uses a miniature silicon photo diode detector incorporating a harmless quantity of a radioactive alpha emitting material. The device produces random voltage pulses when alpha particles are emitted within the photo diode. The device has been used to generate pure, unbiased, non-deterministic random numbers and also to trigger random win events with long odds for applications such as gaming. The event probabilities produced by the device agree very closely with the predictions of Poisson theory.
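The Poisson agreement mentioned above is easy to state: if the mean number of alpha emissions per sampling window is lambda, the probability of observing k pulses is lambda^k e^(-lambda) / k!. A minimal check, with an illustrative rate of 2 emissions per window:

```python
import math

def poisson_pmf(k, lam):
    """P(k decay events in a window) for mean rate lam, per Poisson theory."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# With an average of 2 alpha emissions per window, the chance of seeing
# no pulse at all is e^-2, roughly 13.5%:
p_zero = poisson_pmf(0, 2.0)

# Probabilities over all counts sum to 1 (to numerical precision):
total = sum(poisson_pmf(k, 2.0) for k in range(50))
```

Comparing observed pulse-count frequencies against this distribution is the kind of test behind the claim that the device's event probabilities "agree very closely with the predictions of Poisson theory".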

The Alpha Pulse random generator is robust, durable, highly tamper resistant; it is unaffected by external influences and potentially can be made very small. Its operating principles, design, performance and applications will be reviewed.

View original page

27 October 16:15On the security of digital tachographs / Ross Anderson, University of Cambridge

Room TP4, Computer Laboratory

Tachographs are used in most heavy vehicles in Europe to control drivers' hours, and for secondary purposes ranging from investigating accidents and toxic waste dumping to the detection of fuel fraud. Their effectiveness is under threat from increasing levels of sophisticated fraud and manipulation. I will discuss this in the context of recent EU proposals to move to smartcard-based tachograph systems, which are aimed at cutting fraud and improving the level of enforcement generally. I will argue that the proposed new regime will be extremely vulnerable to the wholesale forgery of smartcards and to system-level manipulation; it has the potential to lead to a large-scale breakdown in control. I will then sketch some potential solutions.

View original page

20 October 16:15Secure implementation of channel abstractions / Cedric Fournet, Microsoft Research

Room TP4, Computer Laboratory

Communication in distributed systems often relies on useful abstractions such as channels, remote procedure calls, and remote method invocations. The implementations of these abstractions sometimes provide security properties, in particular through encryption. We study those security properties, focusing on channel abstractions. We introduce a simple high-level language that includes constructs for creating and using secure channels. The language is a variant of the join-calculus and belongs to the same family as the pi-calculus. We show how to translate the high-level language into a lower-level language that includes cryptographic primitives. In this translation, we map communication on secure channels to encrypted communication on public channels. We obtain a correctness theorem for our translation; this theorem implies that one can reason about programs in the high-level language without mentioning the subtle cryptographic protocols used in their lower-level implementation.

This is joint work with Martin Abadi (Compaq/SRC) and Georges Gonthier (INRIA Rocquencourt).

View original page

16 June 16:15Medical privacy protection - the xtrend project / Vaclav Matyas, University of Cambridge

Room TP4, Computer Laboratory

The Xtrend project involves collecting drug prescription (and collection) data from pharmacies and creating a database that supports evaluation of general practitioners' (GPs') prescription trends by district. The data is collected without patient identity information, but GPs' identity has to be protected carefully by subsequent processing - only some GPs have consented to their identity being known to data users (usually drug wholesalers or manufacturers) and the identity of the others has to be concealed.

The talk will analyse the problems in protecting the identity of the non-consenting GPs. The solution involves measures like setting a minimum number of participating GPs, practices and pharmacies in a district, and concealing the telltale signs of GPs moving between practices or going on holiday. Another interesting issue concerns the fact that the system is currently being built and this provides a certain level of `noise' against malicious data analysis. However, the situation once the system stabilises will almost certainly be different.

View original page

9 June 16:15The art of uncovering those well-hidden bits / Nick Howgrave-Graham, University of Bath

Room TP4, Computer Laboratory

The talk will be based loosely around the use of partial knowledge in solving bivariate Diophantine equations. Many interesting problems fall into this category, including factoring and solving univariate modular equations, both of which have major implications in cryptography.

The methods are based on work by Coppersmith, and employ lattice basis reduction by the LLL algorithm. An interesting theoretical result concerning dual lattices and the LLL algorithm is shown along the way.

Finally, a novel approach to finding solutions to x^2+y^2=N is demonstrated, and applied (using the technique of Pinch and McKee) to breaking a recently proposed elliptic curve cryptosystem.
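For small instances, solutions to x^2 + y^2 = N can be found by naive search (the cryptographically interesting cases involve far larger N and the lattice techniques of the talk, not this):

```python
import math

def two_squares(n):
    """Naive search for non-negative x, y with x*x + y*y == n,
    or None if no such representation exists."""
    for x in range(math.isqrt(n) + 1):
        y_sq = n - x * x
        y = math.isqrt(y_sq)
        if y * y == y_sq:
            return x, y
    return None

# 25 = 0^2 + 5^2 (found first by the ascending search); 3 has no
# representation as a sum of two squares, so two_squares(3) is None.
```
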

View original page

2 June 16:15A denotational definition of system integrity / Simon Foley, University College, Cork

Room TP4, Computer Laboratory

Conventional integrity models limit themselves to the boundary of the computer system and tend to define integrity in an operational or implementation oriented sense. For example, the Clark-Wilson model recommends that well-formed transactions, segregation of duties and auditing be used to ensure integrity. However, the model does not attempt to address what is meant by integrity - evaluating a system gives confidence only to the extent that good design principles have been applied. For instance, when we define a complex segregation of duty policy, we cannot use the model to guarantee that a user of the system cannot somehow bypass the intent of the segregation via some unexpected circuitous route.

Clark and Wilson informally identified segregation of duty as a mechanism that is used to control external consistency, which is described as the correct correspondence between the data object and the real world object that it represents. In this talk I will explore a formal definition for external consistency and illustrate how it is implemented in terms of segregation of duties. This denotational, rather than operational, definition is useful because it allows us to determine whether a particular segregation of duties configuration actually works, that is, whether it ensures that the system is externally consistent.

View original page

27 May 16:15Attacks on copyright marking systems / Fabien Petitcolas, University of Cambridge

Room TP4, Computer Laboratory

In the last few years, a large number of schemes have been proposed for hiding copyright marks and other information in digital pictures, video, audio and other multimedia objects. I will describe some contenders that have appeared in the research literature and in the field; I will then present a number of attacks that enable the information hidden by them to be removed or otherwise rendered unusable.

View original page

26 May 16:15Differential-linear weak key classes of idea / Philip Michael Hawkes, University of Queensland

Room TP4, Computer Laboratory

The International Data Encryption Algorithm (IDEA) is a well known block cipher which is used, for example, in the Pretty Good Privacy (PGP) package. In this talk, the largest known weak key classes of IDEA and reduced-round IDEA are constructed. For some of these classes, membership is determined by a differential-linear test while encrypting with a single key. In particular, 8.5-round IDEA has a weak key class of 2^63 keys (one in every 2^65 keys) for which membership is determined in such a manner. A related-key differential-linear attack on 4-round IDEA is presented which is successful for all keys. Large weak key classes are found for 4.5- to 6.5-round and 8-round IDEA for which membership of these classes is determined by similar related-key differential-linear tests.
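The density claim is simple arithmetic: IDEA has 128-bit keys, so a class of 2^63 weak keys is one key in every 2^65.

```python
# IDEA key space vs. the largest weak key class from the abstract.
total_keys = 2 ** 128
weak_keys = 2 ** 63
assert total_keys // weak_keys == 2 ** 65   # one in every 2^65 keys
```
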

View original page

19 May 16:15Confessions of a red box builder / David Biggins, Rhea International Ltd

Room TP4, Computer Laboratory

In the world of commercial product development, even in a hi-tech environment, there are many conflicting factors that go to make up the success or otherwise of a product - technical, commercial, political, and just plain luck (good or bad).

Balancing these factors requires the patience of Job, the discretion of Caesar's wife, the judgement of Solomon (not Alan), the technical knowledge of Turing (Alan), the deviousness of the Borgias, the ruthlessness of Genghis Khan, the showmanship of PT Barnum, and the financial acumen of J Paul Getty - none of which I have...

So how DO you take a security product to market these days?

This talk aims to cover many of the factors, technical and otherwise, encountered so far in the development of the Latches for Windows product, and the ways we have managed to hang on to the tiger's tail...

12 May Cryptology, technology and policy / Susan Landau, University of Massachusetts

View original page

5 May 16:15The corba security service specification and corba security in practice / Ulrich Lang, University of Cambridge

Room TP4, Computer Laboratory

This seminar will first give a brief introduction to CORBA, and then focus on the CORBA Security Service Specification. The security functionality provided by the Security Service and its relevance to distributed systems security in general will be described on an abstract level. The seminar will also try to compare the Security Service Specification to CORBA security in the real world; issues like trust boundaries, Java security, business requirements etc. will be briefly put into context.

View original page

29 April 17:30Pgp and resistance to key escrow / Phil Zimmermann, Network Associates Inc.

Hopkinson Lecture Theatre, New Museums Site

This week's political developments highlight the trap of buying into a top-down key management infrastructure. I will talk about the new features of PGP's evolving architecture which we have specifically designed in order to make it resistant to key escrow while enhancing its scalability in large organisations.

NOTE: this week's seminar has been arranged at short notice in response to the government's U-turn on crypto policy. It is thus at a non-standard time and venue. Maps and travelling directions can be found here.

Other relevant seminars this term include a talk on the 12th May by Susan Landau of the University of Massachusetts on `Cryptology, Technology and Policy' (Susan is one of the authors of `Privacy on the Line' which documents the crypto policy struggle in the USA) and another on the 19th by David Biggins of Rhea International Ltd entitled `Confessions of a Red Box Builder' (Rhea designed the new electronic red boxes used by some ministers). Both these talks are at the usual 4.15PM in room TP4.

View original page

10 March 16:15Priority driven protocol design / Bruce Christianson, University of Hertfordshire

Room TP4, Computer Laboratory

Priority Driven Communication Protocol Design was a methodology for designing communications protocols which was introduced about fifteen years ago. In this seminar I shall attempt to rehabilitate PDCPD in the context of security protocols, arguing that treating PDCPD as a conceptual framework for reasoning about the design and optimization of protocols (rather than as a design methodology per se) can provide insight into managing the effects of laying off tasks to only partially trusted third parties in order to improve performance. The analogous design problem in 'conventional' communications protocol design is de-layering.

View original page

3 March 16:15Videocrypt - past, present, and future / Yossi Tsuria, News Datacom, Israel

Room TP4, Computer Laboratory

VideoCrypt, with 9 million subscribers on 4 continents, is without doubt one of the most successful conditional access systems in the world. It also enjoys numerous attacks by the pirate community.

The presentation will describe the origins of the system and its key technology elements, and will discuss past and present security issues. It will also tackle future plans and challenges in the fields of interactive TV, copy protection and data broadcasting.

View original page

24 February 16:15Supporting dynamic security labels in multilevel secure object stores / Simon Foley, University College, Cork

Room TP4, Computer Laboratory

Mandatory label-based policies may be used to support a wide range of application security requirements. Examples of these policies include Chinese Walls and Dynamic Segregation of Duties (see the seminar I gave on the 28th October 1997). Labels encode the security state of system entities and the application security policy specifies how these labels may change.

I will describe a framework, based on the Jajodia-Kogan message-filter model, that can support these policies in a multilevel secure OODBMS. This framework can support any (dynamic) label-based policy so long as the effect of a high-level request to relabel a low-level label cannot be detected at the low level. A sample policy will be described whereby high-level users can mark low-level objects, indicating that the object should be migrated to the high-level when deleted (at low).

The framework provides what is essentially an interpreter of multilevel programs: programs that manipulate multilevel data-structures that define the security labels of objects. This enables application functionality and security concerns to be developed (and verified) separately, bringing with it the advantages of a separation of concerns paradigm.

View original page

17 February 16:15Tamper resistant structured magnetics / Ed White, Thorn Secure Science International

Room TP4, Computer Laboratory

Security, and particularly 'Smart Card' security, has become a very hot topic in the 1990s. We have been constantly 'educated' that Smart Cards are secure, and this series of seminars has spent much time examining the various claims and potential flaws in those claims. This talk will take a step back from the detail of smart card security, encryption algorithms etc., and examine the basic elements of security. It will briefly examine the various strengths and vulnerabilities of different approaches and present some ideas on how combining technologies can offer great benefits in reducing the threat of security breaches.

View original page

10 February 16:15What are the wild waves saying? / Owen Lewis and Keith Penny, TEL

Room TP4, Computer Laboratory

What is so often overlooked by those who would maintain the confidentiality of their dealings is that much of the most sensitive and most valuable information first occurs as an act of speech, a personal dialogue. If uninhibited speech can be eavesdropped on as it is created, then no panoply of technical security can subsequently make good that breach. Even in this computer age, the eavesdropping of speech in sensitive areas remains important in intelligence gathering, commercial as much as state.

This presentation outlines the main varieties of the electronic eavesdropping threat to confidential discussions and looks at advanced countermeasures to bugging where RF transmission is used to extract sensitive conversation from secured premises.

Until starting a technical surveillance countermeasures business in 1991, Owen Lewis was a signals officer in the British Army for 22 years. For some years, he was a visiting lecturer to the NATO Joint Services Advanced Electronic Warfare courses. Keith Penny is an engineer with 20 years of experience of the design, manufacture and systems deployment of a range of electronic surveillance and countersurveillance equipment. They have developed the SysRx system for RF spectrum monitoring, which is to be launched at the Police Scientific Development Branch closed exhibition in March 1998 and is first presented at this seminar.

View original page

4 February 16:15Hardware security: smartcards and other tamper resistant modules / Markus Kuhn, University of Cambridge

Babbage Lecture Theatre

Many computer security applications depend on the secure storage of secret key material. The processors storing these keys cannot be protected by walls and guards in applications such as digital purses or pay-TV encryption systems; often the key memory has to be given into the hands of the attacker. Smartcards and other tamper-resistant processors are frequently quoted as a solution for this problem, but there is little published material about how difficult it is for attackers to circumvent the physical protection of these low-cost devices. The talk will discuss various techniques that have been applied to break the security processors used in pay-TV encryption systems and digital purses with much less effort than the manufacturers had hoped.

View original page

28 January 16:15Security protocols and their correctness / Larry Paulson, University of Cambridge

Babbage Lecture Theatre

Security protocols are used in the Internet, mobile phones, digital payment systems, etc. Their goals may be to keep data secret, to preserve it from tampering, or to prevent intruders from assuming somebody else's name. A faulty protocol can be attacked by simple means, such as replaying parts of old sessions, without brute-force codebreaking.

Researchers have developed tools to search for such attacks. However, failure to find attacks does not mean that a protocol is correct. Protocols and their goals are seldom specified formally, which makes it hard to say whether they are correct, even when possible attacks are pointed out.

The speaker will outline recent approaches to showing correctness, taking as an example a simple public-key protocol.

1997

View original page

9 December 16:15Attacks on pay-tv access control systems / Markus Kuhn, University of Cambridge

Room TP4, Computer Laboratory

Subscription-financed pay-TV channels such as BSkyB scramble their broadcast signal and provide their subscribers with special decoders to prevent unauthorized free access. Modern smartcard-based pay-TV access control systems like VideoCrypt are the first large scale consumer application of both cryptography and tamper resistant processors. Their security aspects have been scrutinized in the past five years by both professional pirate decoder manufacturers and amateur hackers. Professional pirates have developed reverse engineering skills previously only assumed to be available to major governments or corporations, while undergraduate students have found surprisingly simple and cheap ways to evade cryptographic protection schemes. There are valuable lessons to be learned for the design of future large-scale cryptographic applications.

View original page

2 December 16:15Fair and blind certification of knowledge / Wenbo Mao, Hewlett-Packard Laboratories

Room TP4, Computer Laboratory

We propose a new notion of fair and blind certification of knowledge. Blindly certified knowledge has a verifiable structure which can only be constructed by its exclusive owner with the help of a certification authority (CA). Verification of knowledge possession will include a simple check on the structure to convince a verifier of proper certification of the knowledge in addition to its exclusive belonging to the owner (prover). Unlike a blind signature, in blind knowledge certification no visible signature of the CA is available to the verifier and thus different sessions in which the same knowledge is used can easily be made unlinkable. As a result, a single piece of blindly certified knowledge can be re-used polynomially many times without being linked to the anonymous user. To prevent anonymity misuse with impunity, we also add fairness to the unlinkable anonymity by escrowing the knowledge to an off-line third party.

View original page

25 November 16:15A secure inter-hospital image reporting tool / Ed Somer, United Medical and Dental Schools

Room TP4, Computer Laboratory

A system has been developed that allows the rapid, secure and private transmission of Nuclear Medicine and Positron Emission Tomography (PET) image data with associated patient files between a number of hospital sites facilitating clinical collaboration and accelerated education of Nuclear Medicine physicians.

Participating departments maintain an mSQL database populated with DICOM objects generated from Interfile or proprietary data. The database contains patient and study information and references to image data which may, or may not, be in DICOM format. An internet navigator is used to query the database, and Multipurpose Internet Mail Extensions (MIME) typing enables the navigator to download a specific image and launch a viewer appropriate to the data format and viewing platform. The clinicians can then report the images and submit a text report either by fax or over the internet. The protocols are independent of the network infrastructure linking the hospital sites, and data transfer has been secured through the Secure Sockets Layer (SSL). ATM networking has made the establishment of a dial-up telemedicine virtual LAN practical, offering enhanced security.

Nuclear Medicine and PET departments on three hospital sites are currently involved in this project, and the scalable nature of this solution makes further expansion practical.

View original page

18 November 16:15The discrete logarithm problem on elliptic curves / Nigel Smart, Hewlett-Packard Laboratories

Room TP4, Computer Laboratory

In recent years the use of elliptic curves in cryptography has become something of a "hot topic". This is because the discrete logarithm problem on elliptic curves is in general harder than the associated problem in the more traditional finite fields. In this talk I shall outline how one should choose one's elliptic curve by showing how certain attacks work. In particular I shall concentrate on the recent attack on "trace one curves".
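As background (not from the talk), the group law and the exhaustive-search baseline it defeats can be sketched on a toy curve. The parameters below are illustrative only; real curves use primes of 160 bits or more, and the attacks discussed in the talk are far subtler than brute force.

```python
p, a, b = 97, 2, 3  # toy curve y^2 = x^3 + 2x + 3 over GF(97)

def ec_add(P, Q):
    """Add two points on the curve (None is the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = infinity
    if P == Q:       # doubling: tangent-line slope
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:            # addition: chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

G = (3, 6)  # on the curve: 6**2 = 3**3 + 2*3 + 3 = 36 (mod 97)

def brute_force_dlog(G, Q):
    """Find k with kG = Q by exhaustive search -- exponential in the group size,
    which is why well-chosen curves make the discrete log problem hard."""
    R, k = G, 1
    while R != Q:
        R = ec_add(R, G)
        k += 1
    return k
```

The attacks mentioned in the abstract (such as the one on trace-one curves) work by mapping the curve group into a structure where the logarithm is easy, which is precisely what a careful choice of curve must rule out.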

View original page

28 October 16:15Dynamic separation of duties in the clark-wilson model: shifting trust in the application back into the tcb / Simon Foley, University College, Cork

Room TP4, Computer Laboratory

The Clark-Wilson security model may be used for systems where security is enforced across both the operating system and the application systems. Under this model, a secure system may be viewed as a certified application running on top of a trusted computing base (TCB). Certifying an application corresponds to arguing (to a degree) its correctness; the TCB is expected to have undergone some sort of security evaluation. A variety of existing implementation models, for example multilevel security, have been shown capable of upholding the Clark-Wilson TCB requirements.

We argue that, given an evaluated TCB, an application designer should try to minimize the amount of security-critical code that is contained within the application and rely on the TCB to enforce security wherever possible. Under the Clark-Wilson model, the TCB is expected to support (enforce) static segregation of duties. However, it appears that dynamic segregation of duty must be implemented within the application itself.

In this talk I will describe a framework in which dynamic Clark-Wilson style segregation of duty policies can be expressed and supported by the TCB. I will also describe how these policies can be enforced under Unix and Multilevel TCBs.

View original page

21 October 16:15Inductive analysis of the internet protocol tls / Larry Paulson, University of Cambridge

Room TP4, Computer Laboratory

Internet browsers use security protocols to protect confidential messages. An inductive analysis of TLS (a descendant of SSL 3.0) has been performed using the theorem prover Isabelle. Proofs are based on higher-order logic and make no assumptions concerning beliefs or finiteness. All the obvious security goals can be proved; session resumption appears to be secure even if old session keys have been compromised. The analysis suggests modest changes to simplify the protocol.

TLS, even at an abstract level, is much more complicated than most protocols that researchers have verified. Session keys are negotiated rather than distributed, some messages are optional, and others may be sent at various times. The resources needed to verify TLS are modest: the inductive approach scales up.

View original page

14 October 16:15Edifact security and the public key infrastructure / Peter Landrock, Cryptomathic and Aarhus University

Room TP4, Computer Laboratory

The purpose of EDI is to process business data in an automated way. This used to be handled on a bilateral basis between contracting parties using leased lines etc., but when parties without an initial contract do business over the Internet and/or X.400, it is absolutely vital to secure the interchanges by what we call security services: non-repudiation of origin/receipt and confidentiality are the obvious choices.

Back in the early 1990s, the UN Security Joint Working Group came up with a number of proposals for the integration of these services at the EDIFACT syntax level, thus making them independent of the transport mechanism in use. This implied that the EDI translators would handle security, and in an automated fashion. The next step was to bring the supporting public key infrastructure with communicating CAs, LRAs and Directories into play, and a new EDIFACT message, KEYMAN, was designed to handle this.

The EDIFACT certificate thus derived was far more business-minded in its design than the original X.509 certificate. The PKI and the underlying business model will be described, and we will explain how to avoid blacklists. We believe this model is the right one to take forward in electronic commerce, and this is exactly what we are doing in large pilots such as SEMPER, BOLERO, DYP, and ELSME.

View original page

3 October 10:00Pachyderm: keeping your email on the web / Michael Schroeder, Digital Systems Research Center

Room TP4, Computer Laboratory

Pachyderm is an experimental email system exhibiting the following properties:

The Platform is the Web: all interaction with Pachyderm is through a web browser; you can create, read and browse your email from any web-connected computer.

Location Independence: there is no state locked in particular client computers; you can move among client computers at will, and your email state will still be available to you.

Bandwidth Tolerance: Pachyderm is designed to tolerate a wide range of connectivity bandwidths, from high-speed local area networks to dial-up modems.

Easy Data Retrieval: all access to your email is based on queries on a full-text index; you can find the right message from among tens of thousands without any need for manual classification schemes such as folders.

The talk will outline the rationale for Pachyderm and describe the structure of the system.

NOTE: the time is nonstandard for a security seminar.

View original page

29 September 11:15Composable and emergent security properties / Heather Hinton, Ryerson Polytechnic University

Room TP4, Computer Laboratory

Emergent behaviours are those that result from interaction between the behaviour of the components of a composite system. We show that they play a role in the composite system's security properties: they may give rise to vulnerabilities directly, or result in the non-composability of security properties.

Using an emergent properties analysis, we can identify which aspects of component behaviour lead to undesirable emergent behaviour. This may enable us to strengthen individual systems so that desired properties compose. We can also use this approach to identify, a priori, when non-composable properties will be violated within a composite system.

We have shown how to apply this approach to several toy examples and are currently using it to analyse a Network Reference Monitor.

NOTE: the time is nonstandard for a security seminar.

View original page

25 September 9:30Security analysis of rsa-type cryptosystems / Marc Joye, Katholieke Universiteit Leuven

Discussion Room, Computer Laboratory

In 1978, Rivest, Shamir and Adleman introduced the public-key cryptosystem RSA. Thereafter, it was extended to Lucas sequences and elliptic curves. In this talk, we will analyse the security of these cryptosystems in given contexts. In particular, some major known attacks against RSA-type systems will be reviewed. We will also see how these attacks can be avoided.
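One classic attack that such surveys review can be sketched with toy numbers (illustrative primes only; the point generalises to any short unpadded message under a small public exponent): with e = 3 and no padding, a message m with m**3 < n encrypts to c = m**3 exactly, so the attacker recovers m with an ordinary integer cube root, with no factoring at all.

```python
p, q, e = 1013, 1019, 3        # toy primes; real moduli are 1024+ bits
n = p * q                      # public modulus
m = 42                         # short unpadded message, so m**3 < n
c = pow(m, e, n)               # textbook RSA encryption: c = m^e mod n

def integer_cube_root(c):
    """Largest r with r**3 <= c, found by binary search."""
    lo, hi = 0, 1
    while hi ** 3 <= c:
        hi *= 2
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= c:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Because m**3 < n, the reduction mod n never happened and c is literally m**3.
recovered = integer_cube_root(c)
```

Randomised padding, which became standard practice, is precisely the countermeasure that defeats this family of attacks.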

NOTE: the time and the place are nonstandard for a security seminar.

View original page

29 August 16:00Visual cryptography with polarisation / Eli Biham, Technion, Haifa, Israel.

Room TP4, Computer Laboratory

Visual cryptography was introduced by Naor and Shamir as a way to allow fast visual decryption of graphic objects. No decryption device is required; instead decryption is done by fitting slides together. Several schemes were suggested which allow users to share secret pictures (and text) in an information theoretically secure way, so that deciphering is easy if all the shares are given, but it is impossible if one of them is missing. The drawbacks of all the existing methods are the exponentially small contrast of the deciphered picture as the number of shares increases, and the reduction in quality due to pixels' being represented by many smaller (black and white) pixels.
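The classical 2-out-of-2 construction just described can be rendered in a few lines (my own minimal sketch, not code from the talk): each secret pixel expands into two subpixels per share, and stacking the transparencies corresponds to OR-ing the opaque subpixels. A white pixel stacks to one black subpixel out of two (grey), a black pixel to two (full black), which is exactly the contrast loss noted above.

```python
import secrets

PATTERNS = [(0, 1), (1, 0)]  # 1 = opaque (black) subpixel

def share_pixel(secret_bit):
    """Split one pixel (0 = white, 1 = black) into two subpixel pairs."""
    a = PATTERNS[secrets.randbelow(2)]   # uniformly random pattern for share 1
    if secret_bit == 0:
        b = a                            # white: identical patterns
    else:
        b = tuple(1 - x for x in a)      # black: complementary pattern
    return a, b

def stack(a, b):
    """Physically overlaying the transparencies ORs the opaque subpixels."""
    return tuple(x | y for x, y in zip(a, b))
```

Each share in isolation is a uniformly random pattern, so a single share leaks nothing about the secret pixel; only stacking reveals it.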

In this talk we suggest new visual cryptographic schemes based on light polarization which are better than the optimal existing schemes. Then we present an ultimate scheme which does not subdivide pixels, and in which the contrast is independent of the number of shares.

Joint work with Ayal Itzkovitz

NOTE: the time is nonstandard for a security seminar.

View original page

7 August State reachability techniques in the formal verification of cryptographic protocols: state of the art and open issues / Catherine A. Meadows, Naval Research Laboratory

Room TP4, Computer Laboratory

NOTE: the time is nonstandard for a security seminar.

View original page

10 July 15:00Security and types in the java virtual machine / Martín Abadi, DEC Systems Research Center

Room TP4, Computer Laboratory

Java is typically compiled into an intermediate language, which we call JVML. The Java Virtual Machine interprets JVML code. Because mobile JVML code is not always trusted, a bytecode verifier enforces static constraints that prevent various dynamic errors. Given the importance of the bytecode verifier to security, its current descriptions are inadequate. We consider the use of typing rules to describe the bytecode verifier because they are more precise than prose, clearer than code, and easier to reason about than either. We explore the viability of this approach by developing a sound type system for a subset of JVML. This subset, despite its small size, is interesting because it includes JVML subroutines, a source of substantial difficulty for the bytecode verifier. (Joint work with Raymie Stata.)

NOTE: the time is nonstandard for a security seminar.

View original page

8 July 16:15A new paradigm for massively parallel random search / Adi Shamir, Weizmann Institute of Science, Israel

Room TP4, Computer Laboratory

The problem of solving hard combinatorial optimization problems or breaking cryptographic codes has led to several novel paradigms for carrying out massively parallel random search, including quantum and DNA computers. In this talk, the speaker will propose a new paradigm, based on a simple and easy-to-implement idea.

The speaker will use some props to demonstrate the new paradigm in real time. It is accessible to everyone, though some familiarity with the structure of DES-like schemes helps to motivate the research.

View original page

18 June 16:15Abstractions for mobile computation / Luca Cardelli, DEC Systems Research Center

Hopkinson lecture theatre, Computer Laboratory

There are two distinct areas of work in mobility: "mobile computing", concerning computation that is carried out in mobile devices, and "mobile computation", concerning mobile code that moves between devices. These distinctions are destined to vanish. We aim to describe all aspects of mobility within a single framework that encompasses mobile agents, the ambients where agents interact and the mobility of the ambients themselves.

The main difficulty with mobile computation is not in mobility per se, but in the crossing of administrative domains. Mobile programs must be equipped to navigate a hierarchy of domains, at every step obtaining authorization to move further. Therefore, at the most fundamental level we need to capture notions of locations, of mobility and of authorization to move.

We identify "mobile ambients" as a fundamental abstraction that generalizes both dynamic agents and the static domains they must cross. From a formal point of view we develop a simple but computationally powerful calculus that directly embodies domains and mobility (and little else). The calculus forms the basis of a small-language/ Java-library. We demonstrate the expressiveness of the approach by a series of examples, including showing how a notion

View original page

20 May 16:15Secure transfer of trust / Carl Ellison, CyberCash Inc.

Room TP4, Computer Laboratory

For a decade, people have viewed the binding of names to keys as the only problem. I will argue that names are almost meaningless in the context of the global Internet, and that we need rather to transfer authorisation securely from one entity to another. I will discuss two new attempts to do this, namely SDSI and SPKI.

View original page

24 March 14:15Steganography and copyright marking / Ross Anderson, University of Cambridge

Hopkinson lecture theatre, Computer Laboratory

One of the fastest growing areas of security research is steganography - hiding information in other information. One example is hiding encrypted copyright marks in digital images; the same ideas can be applied to other problems such as annotation and indexing, and to other kinds of object such as digital audio.

Research in this subject is highly interdisciplinary, and a number of people with backgrounds in graphics, signals processing and statistics have expressed interest in getting involved. I will therefore be giving a brief tutorial on the subject and outlining where the interesting areas of research appear to be.

NOTE: the time and location are nonstandard for a security seminar.

View original page

18 March The impact of dynamic linking on java security / Drew Dean, Princeton University

Room TP4, Computer Laboratory

We survey some of the major security flaws found in Java-enabled web browsers from Sun, Netscape, and Microsoft over the last 15 months. While numerous issues have been found throughout the system, the worst problems come from type safety failures in the implementations that allow an attacker to run arbitrary machine code. Several of the type safety failures can be traced to dynamic linking. We examine a formal model of dynamic linking, and find some necessary conditions for safety.

View original page

11 March Programming goofs that will hose your system security / Alec Muffett, Sun Microsystems

Room TP4, Computer Laboratory

This seminar is an illustrated and light-hearted introduction to vectors of insecurity within modern computer systems, focussing especially on Unix-like operating systems (i.e. don't expect the speaker to bang on about viruses for very long) and how it seems that the same problems come up time and again in new guises.

The presentation should be suitable for programmers, CS students, and systems administrators of all grades, especially those with a programming bent, and will try to educate the audience away from some of the more grotesque mistakes of systems programming.

View original page

4 March Non-repudiation / Mike Roe, University of Cambridge

Room TP4, Computer Laboratory

The invention of public-key cryptography led to the idea of non-repudiation protocols, which are intended to enable the resolution of disputes about previous protocol exchanges. Later work on non-repudiation has shown that the means by which cryptographic keys are managed is just as important as the cryptographic algorithm, and that public-key cryptography is neither necessary nor sufficient.

View original page

25 February The composition of security properties / Aris Zakinthinos, University of Cambridge

Room TP4, Computer Laboratory

The ability to design and construct complex systems that enforce a security property presupposes an understanding of security properties themselves, as well as the security properties of a system that is composed from secure components. This talk will present a general theory of possibilistic security properties and of system composition for such properties.

It has been demonstrated that security properties do not fit into the safety/liveness framework defined by Alpern and Schneider. That is, a security property cannot be expressed as a property of a trace. (A trace is an ordered stream of events that can occur at the inputs and outputs of a component.) However, we demonstrate that security properties can be expressed as a predicate over the set of all traces of a component that are consistent with a given low-level view of a trace.

The issue of composition with feedback has been the focus of much research. We demonstrate that the problem with feedback composition is related to the synchronization of the communication events between the various components. This allows us to provide necessary and sufficient conditions for determining when feedback composition will fail for Generalized Noninterference and for all properties stronger than Generalized Noninterference.

An understanding of what a security property is allows us to provide a method of constructing a system that satisfies a desired security property. This analysis yields a condition that can be used to determine how a property may emerge under composition.

View original page

21 February 16:00The sdsi public-key infrastructure / Butler Lampson, Microsoft Corporation

Room TP4, Computer Laboratory

SDSI is a new distributed security infrastructure, joint work by Butler Lampson and Ron Rivest. It has a simple public-key infrastructure that emphasizes linked local name spaces rather than a hierarchical global name space. SDSI makes it easy to define groups and issue group-membership certificates. Groups provide simple, clear terminology for defining access control lists and security policies.
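The flavour of linked local name spaces can be sketched as follows (hypothetical names and data layout, not SDSI's actual syntax or certificate format): each principal, identified by its key, binds local names either to keys or to names defined relative to other principals, and resolution chains through the owners' spaces rather than descending a global hierarchy.

```python
# Each key owns a local name space. A group entry is either a local name
# or a tuple chaining through another principal's space.
namespaces = {
    "K_alice": {"bob": "K_bob", "friends": ["bob", ("bob", "carol")]},
    "K_bob": {"carol": "K_carol"},
}

def resolve(owner, path):
    """Follow a chain of local names, one namespace hop per component."""
    key = owner
    for name in path:
        key = namespaces[key][name]
    return key

def group_members(owner, group):
    """Expand a group definition into the set of member keys."""
    members = set()
    for entry in namespaces[owner][group]:
        path = entry if isinstance(entry, tuple) else (entry,)
        members.add(resolve(owner, path))
    return members
```

Here Alice's group "friends" names Bob directly and Carol indirectly as "Bob's carol", so an access control list written in terms of the group needs no global naming authority at all.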

View original page

18 February Trust / Perri 6, Demos

Room TP4, Computer Laboratory

Unless, as citizens and consumers, we believe that we can trust the government and business organisations that keep personal information about us to treat that information confidentially and to use it only in ways that we consider to be in our interest, the "information society" will be a conflict-ridden, expensive and litigious affair.

In this presentation, I want to explore how we can better understand what determines trust in general. I then go on to look more specifically at trust in connection with privacy, and I will conclude by setting out some strategies for policy in the fields of data protection, trusted third parties and independence of subject access, to buttress trust and trustworthiness in the digital age.

11 February Electronic copyright management - the way ahead / Alastair Kelman, Barrister

View original page

7 February The breaking of the german lorenz world war 2 cypher: max newman's contribution and colossus. / Tony Sale, President, The Bletchley Park Trust

To mark the centenary of the birth of M.H.A. Newman (1897-1984), St John's College are hosting a talk by Tony Sale, President of the Bletchley Park Trust.

Tony Sale is widely known for his achievement in constructing a working replica of the wartime Colossus computer.

The centenary will also be marked by an exhibition about Max Newman's life and work, to be held in the College Library from February 7th until Easter.

View original page

4 February Security based on error correcting codes and its application to pay-tv / Jean-Bernard Fischer, Thomson Consumer Electronics, France

Room TP4, Computer Laboratory

We build an original cryptographic toolbox based on error-correcting codes. Having studied the difficulty of the syndrome decoding problem, we define a one-way function and a general setting for its use. Our results allow us to prove the security of Stern's authentication protocol SD; we also construct a provably secure pseudo-random generator and a very efficient and versatile keyed one-way function. These algorithms are used to provide end-to-end security for an analog pay-TV system using smart cards, similar to VideoCrypt.

View original page

28 January Mechanised proofs for a recursive authentication protocol / Larry Paulson, University of Cambridge

Room TP4, Computer Laboratory

A novel protocol has been formally analyzed using the prover Isabelle/HOL, following the inductive approach described in the speaker's earlier work. A single run of the protocol delivers session keys to any number of agents, allowing neighbours to perform mutual authentication. The basic security theorem states that session keys are correctly delivered to adjacent pairs of honest agents, even if other agents in the chain are compromised. The complexity of the protocol caused modest difficulties in the specification and proofs, but symmetries in the protocol reduced the number of separate theorems to prove.

1996

5 December Cryptographic algorithm engineering / Josef Pieprzyk, University of Wollongong, Australia

View original page

3 December Factoring and smart cards / Richard Pinch, DPMMS, University of Cambridge

Seminar Room 1, DPMMS

Dr Pinch will be talking specifically about an attack on a proposal for server-aided RSA computation using some factoring methods which go back to Lehmer.

Please note that this seminar is held at the DPMMS and not in The Computer Laboratory. Tea will be available in the DPMMS Common Room from 15:45.

26 November Information warfare and infosec - future challenges / David Ferbrache, Defence Research Agency, Malvern

View original page

19 November The gabidulin cryptosystem / Keith Gibson, Birkbeck College, London

Room TP4, Computer Laboratory

The Gabidulin Cryptosystem is a Public Key Cryptosystem (PKC) based on error correcting codes. Two versions of it have been published. After I showed how to break medium sized instances of the first version, Prof. Gabidulin agreed that his choice of system parameters was unfortunate, and produced a second set of parameters which he claimed were the most secure set possible. They turn out to be the least secure set possible, and this talk will show how to break even large instances of the second version in a matter of seconds, while at the same time showing how to choose the parameters so as to defeat my methods. Finding a secret key of an instance of the PKC can be reduced to solving an instance of an intriguing search problem of linear algebra, and it would be of great interest to know whether this problem is NP-complete, since there is no known PKC for which finding secret keys is NP-complete. One can make an intractability assumption that members of a certain family of instances of the search problem are almost always hard, and on this assumption the Gabidulin PKC is provably secure.

View original page

12 November Formal analysis of protocols using induction / Larry Paulson, Cambridge University

Room TP4, Computer Laboratory

Security protocols can be formally specified in terms of traces, which may involve many interleaved runs. Traces are defined inductively. Protocol descriptions model accidental key losses as well as attacks. The model spy can send spoof messages made up of components decrypted from previous traffic.

The approach has been implemented using the proof assistant Isabelle/HOL. Several symmetric-key protocols have been analysed, including Needham-Schroeder, Yahalom and Otway-Rees. A new attack has been discovered in a variant of Otway-Rees (already broken by Mao and Boyd). Assertions concerning secrecy and authenticity can be proved.

The approach rests on a common theory of messages, with three operators. The operator "parts" denotes the components of a set of messages. The operator "analz" denotes those parts that can be decrypted with known keys. The operator "synth" denotes those messages that can be expressed in terms of given components. The three operators enjoy many algebraic laws that are invaluable in proofs.
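A rough executable rendering of the three operators may help fix the intuitions (this sketch is mine, over a toy message algebra of atoms, pairs and symmetric encryption; Paulson's actual development is in Isabelle/HOL, not Python).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Crypt:
    key: str
    body: object  # an atom (string), a pair (tuple), or another Crypt

def parts(H):
    """All components of H, reachable even without any keys."""
    out, stack = set(), list(H)
    while stack:
        m = stack.pop()
        if m in out:
            continue
        out.add(m)
        if isinstance(m, tuple):
            stack.extend(m)
        elif isinstance(m, Crypt):
            stack.append(m.body)
    return out

def analz(H):
    """Components recoverable using only keys that are themselves recoverable."""
    out, changed = set(H), True
    while changed:
        changed = False
        for m in list(out):
            if isinstance(m, tuple):
                new = m
            elif isinstance(m, Crypt) and m.key in out:
                new = (m.body,)          # decrypt only with a known key
            else:
                new = ()
            for x in new:
                if x not in out:
                    out.add(x)
                    changed = True
    return out

def synth(m, H):
    """Membership test for the (infinite) synth closure: can the spy build m?"""
    if m in H:
        return True
    if isinstance(m, tuple):
        return all(synth(x, H) for x in m)
    if isinstance(m, Crypt):
        return m.key in H and synth(m.body, H)
    return False
```

For example, a nonce encrypted under an unknown key is in the parts of the traffic but not in its analz, which is exactly the distinction the secrecy theorems turn on.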

View original page

5 November Linking trust with network reliability - the byzantine generals strike back / Mike Burmester, Royal Holloway, London

Room TP4, Computer Laboratory

Reliability against failures from faulty links in an open network is usually assured by employing a network which has an appropriate topology. Security is assured by authenticating (and/or encrypting) the exchanged messages. For this, however, a certain degree of trust among the participating entities is needed. `Trusted paths' can be regarded as edges of a graph which we call the security graph. Usually it is assumed that this graph is almost complete, and that the entities are aware of its topology. In our scenario this is not the case.

We link trust with reliability, by analyzing the security graph. There are two models, a deterministic one in which the relative trust in a path is a Boolean expression, and a probabilistic one in which the vertices are assigned probabilities, and the trust in a path is a probability associated with the Boolean expression. We then discuss the `consensus problem' in the new scenario.

The talk is based on recent work with Yvo Desmedt. It is related to earlier work by Beth-Borcherding-Klein, Maurer and Reiter-Stubblebine, but differs in some important respects.

View original page

29 October New technologies and better privacy / Francis Aldhouse, Office of the Data Protection Registrar

Room TP4, Computer Laboratory

The Information Society will soon be upon us. Government and the private sector are looking to the new information technologies as a means of delivering goods and services more efficiently, more profitably and less expensively. Smart Cards and Active Badges that personalise network systems, Multimedia Work Space systems and e-mail communication can all benefit the individual. At the same time, these systems have the capability of increasingly tracking and recording our activities. They create a surveillance society by accident.

This need not be so. Technologies can be implemented to enhance and not invade personal privacy. Privacy enhancing technology should be the approach of the ethical engineer.

View original page

22 October Clock controlled sequence generators and their cryptanalysis / Bill Chambers, King's College, London

Room TP4, Computer Laboratory

There have been a number of recent developments in the design of clock-controlled shift registers, where feedback shift registers are stepped irregularly in an attempt to break up their linearity while maintaining good statistical properties. Among recent developments are the shrinking generator, and the "alleged A5" cipher. At the same time there have been a number of cryptanalytic attacks by Menicocci, by Zivkovic and by Golic, amongst others. I shall talk about basic generators such as the step-1/2 and shrinking generators and the attacks proposed by Zivkovic ("embedding") and Golic ("linearisation"). Then I shall consider the stop-go Gollmann cascades and the attacks proposed by Menicocci and by Park et al. (Here the clocking sequences are XOR'd with the outputs from the clocked registers.) The attacks proposed by Zivkovic have been extended to step-1/2 Gollmann cascades, and have been found equivalent to the "lock-in" attacks discovered earlier.

One of my big points is that most of these attacks are easily parried.

If time permits I shall mention some systems which have not yet been seriously attacked, in the hope of encouraging someone to have a go. Among these are systems with mutual clock-control, for which no very rigorous theory is known.

15 October On the ElGamal family of signatures and electronic cash / Wenbo Mao, Hewlett Packard Laboratories, Bristol

View original page

8 October A calculus for cryptographic protocols: the spi calculus / Andy Gordon, University of Cambridge

Room TP4, Computer Laboratory

We introduce the spi calculus, an extension of the pi calculus designed for the description and analysis of cryptographic protocols. We show how to use the spi calculus, particularly for studying authentication protocols. The pi calculus (without extension) suffices for some abstract protocols; the spi calculus enables us to consider cryptographic issues in more detail. We represent protocols as processes in the spi calculus and state their security properties in terms of coarse-grained notions of protocol equivalence.

This is joint work with Martin Abadi.

View original page

23 September Symmetric-key ciphers based on hard problems / Matt Blaze, AT&T Research

Room TP4, Computer Laboratory

A useful principle in cipher design is to reduce or at least relate closely the cryptanalysis of the cipher to some long-studied problem that is believed to be difficult. Most public-key ciphers follow this principle fairly closely (e.g., RSA is at least similar to factoring). Modern symmetric-key ciphers, on the other hand, can rarely be reduced in this way and so are frequently designed specifically to resist the various known cryptanalytic attacks. In this informal talk, we examine a simple cipher primitive, based on Feistel networks, for which recovery of its internal state given its inputs and outputs is NP-complete. We outline simple and efficient block- and stream- cipher constructions based on this primitive.

1995

View original page

28 November Quantum computation: theory and experiments / Artur Ekert, Oxford University

Room TP4, Computer Laboratory

As computers become faster they must become smaller because of the finiteness of the speed of light. The history of computer technology has involved a sequence of changes from one type of physical realisation to another - from gears to relays to valves to transistors to integrated circuits and so on. Quantum mechanics is already important in the design of microelectronic components. Soon it will be necessary to harness quantum mechanics rather than simply take it into account, and at that point it will be possible to give data processing devices new functionality. Quantum entanglement and quantum interference will make quantum computation so powerful that many problems, which are believed to be intractable on any classical computer, will become efficiently solvable. In order to illustrate the power of quantum data processing a brief discussion of Shor's quantum factoring algorithm will be provided and possibilities of its practical implementation will be discussed.

Oxford Quantum Computation pages

View original page

21 November Firewalls as a network security tool (past, present and future) / Alec Muffett, Sun Microsystems

Room TP4, Computer Laboratory

The "Firewall" - taking the (quite broad) definition of a firewall as any device designed (in some manner) to restrict "soft" access to a network - has migrated from being a tool of the paranoid systems administrator into being a standard part of modern network infrastructures.

This seminar will review why this situation has come about, what modern firewall architectures (both basic and advanced) look like, examine what they can/cannot accomplish, and will speculate upon the future potential of firewalls as access-security devices.

View original page

14 November Computer based fingerprint recognition / Mike Lynch, Cambridge Neurosciences Ltd

Room TP4, Computer Laboratory

Fingerprints are the most specific known characteristics of people and are able to identify them uniquely over very large databases. However, the low quality of fingerprint data found, for example, at the scene of a crime can challenge the ability of computer-based methods to exploit all the inherent information. Recent advances in pattern recognition methods such as neural networks have led to highly accurate automated systems which have found applications in police, national registration, welfare and immigration systems. The new technologies have also been applied to biometric identification problems, producing new, very low-cost, accurate readers for computer and physical access control.

View original pageView slides/notes

7 November Paranoia and location / Ian Jackson, Cambridge University

Room TP4, Computer Laboratory

Increasingly widespread use is being made of technologies which allow individuals to be located and tracked. Many users express significant privacy concerns. Also, when systems such as these are used to make access control decisions such as unlocking doors and teleporting computer login sessions, a higher degree of security is demanded than was often initially planned.

In this talk I will show how technology similar to the Cypherpunk remailers, but on a smaller scale, can be used to give the user complete control over the information about their location, but still let them prove where they are to parts of the infrastructure when they need to.

PostScript version of slides

View original page

31 October Engineering aspects of fast network payments / Chris Sutherland and Harry Manifavas, Cambridge University

Room TP4, Computer Laboratory

There is considerable interest at present in protocols for `electronic commerce', which is usually taken to mean paying for video on demand, worldwide web pages and access to libraries and software. It is often supposed that this is a new field, but network payment mechanisms have been around for years. We describe their history and the lessons which should be learned. We then describe a number of recent proposals, and present a digital cash proposal of our own.

View original page

24 October Using process algebra to break security protocols / Gavin Lowe, Oxford University

Room TP4, Computer Laboratory

In this talk I will describe how we may analyze security protocols using CSP and its refinement checker FDR. Briefly, we encode the protocol in CSP, produce a CSP model of the most general attacker who can interact with the protocol, and use FDR to test whether the resulting system is secure. I will show how to apply this method to the well known Needham-Schroeder Public-Key Protocol. FDR discovers an attack upon the protocol, which allows an intruder to impersonate another agent. I will then show how to adapt the protocol to prevent this attack, and briefly indicate how we may use FDR to prove that the resulting protocol is secure.
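The attack FDR finds on Needham-Schroeder is Lowe's well-known man-in-the-middle. The sketch below replays the attack trace in Python, modelling public-key encryption as a tuple tagged with the intended reader's key; the key names (`pkA`, `pkB`, `pkI`) and nonce values are illustrative.

```python
# Lowe's attack on the Needham-Schroeder public-key protocol: A runs the
# protocol with the intruder I, and I reuses A's messages to impersonate
# A to B. Encryption is modelled abstractly.

def enc(pk, payload):        # {payload}_pk : only pk's owner can open it
    return ('enc', pk, payload)

def dec(owner, msg):         # open a ciphertext if addressed to us
    tag, pk, payload = msg
    assert tag == 'enc' and pk == owner, "cannot decrypt"
    return payload

trace = []

# Msg 1: A -> I : {Na, A}_pkI   (A genuinely wants to talk to I)
m1 = enc('pkI', ('Na', 'A'))
trace.append(('A', 'I', m1))

# I decrypts m1 and replays its contents to B, claiming to be A.
na, a = dec('pkI', m1)
m1b = enc('pkB', (na, a))            # Msg 1': I(A) -> B : {Na, A}_pkB
trace.append(('I(A)', 'B', m1b))

# Msg 2: B -> A : {Na, Nb}_pkA  -- B replies to "A"; I just forwards it.
m2 = enc('pkA', (na, 'Nb'))
trace.append(('B', 'A', m2))

# A decrypts m2; the nonces look right for her run with I, so she
# answers with {Nb}_pkI -- handing B's nonce to the intruder.
_, nb = dec('pkA', m2)
m3 = enc('pkI', (nb,))
trace.append(('A', 'I', m3))

# I decrypts Nb and completes B's run: B now believes he talked to A.
(nb_leaked,) = dec('pkI', m3)
m3b = enc('pkB', (nb_leaked,))       # Msg 3': I(A) -> B : {Nb}_pkB
trace.append(('I(A)', 'B', m3b))

print(nb_leaked)   # the intruder now knows B's nonce
```

Lowe's fix, mentioned at the end of the abstract, adds B's identity to message 2 ({Na, Nb, B}_pkA), so A would notice that the reply did not come from I.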

View original page

17 October A csp approach to verifying crypto protocols / Peter Ryan, Defence Research Agency, Malvern

Room TP4, Computer Laboratory

We give an overview of a research project aimed at applying formal methods to the analysis and design of cryptographic protocols, and present some results on the specification using CSP of their security properties, including authentication, key exchange/distribution, robustness, non-repudiation, integrity, confidentiality and anonymity.

We can also model communications systems, and hostile agents, in CSP, and so we can analyse whether the security properties are upheld. We describe how the CSP model-checker FDR can be used to assist, and illustrate this with examples of how our techniques found flaws in published protocols, and how they can assist in the design of new or improved protocols.

View original page

10 October Problems of stream cipher generators with mutual clock control / Bill Chambers, King's College, London

Room TP4, Computer Laboratory

The speaker has been looking at the cycle structure of an algorithm posted just over a year ago on the Internet and alleged to be the secret A5 algorithm used for confidentiality in the GSM mobile telephone system. This algorithm employs three mutually clock-controlled shift registers, and can fairly quickly enter a loop with what is essentially the shortest possible period, a number very small compared with the total number of states, or even its square root. Moreover this behaviour is robust, not being influenced by factors such as choice of primitive feedback polynomial or even clocking logic (with a proviso to be discussed). A fairly straightforward explanation for this behaviour has been found. Some ways of getting around the problem of excessively short periods are considered, as well as the behaviour of systems with different numbers of mutually clocked registers. In particular a mention is made of the wartime T52e cipher, perhaps the inspiration for "alleged A5".
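The short-cycle phenomenon can be demonstrated on a toy system. The sketch below is not the alleged A5 itself: the register lengths, feedback taps, and majority-style clocking rule are all illustrative assumptions. It simply iterates the joint state of three mutually clock-controlled registers and measures the length of the loop it falls into, against the total state space.

```python
# Toy illustration of mutual clock control: three small LFSRs each step
# only when their clocking tap agrees with the majority of the three
# taps. Sizes and taps are arbitrary choices, not from any real cipher.

SIZES = (4, 5, 6)                      # register lengths in bits
FEEDBACK = ((3, 0), (4, 1), (5, 0))    # XOR tap positions per register
CLOCK_TAP = (1, 2, 3)                  # bit consulted by the clocking rule

def step(state):
    bits = [(state[i] >> CLOCK_TAP[i]) & 1 for i in range(3)]
    maj = 1 if sum(bits) >= 2 else 0
    out = []
    for i, r in enumerate(state):
        if bits[i] == maj:                         # register i is clocked
            fb = 0
            for t in FEEDBACK[i]:
                fb ^= (r >> t) & 1
            r = ((r << 1) | fb) & ((1 << SIZES[i]) - 1)
        out.append(r)
    return tuple(out)

def cycle_length(state, limit=1 << 16):
    """Iterate until the joint state repeats; return the loop length."""
    seen = {}
    for t in range(limit):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state)
    return None

total_states = 1 << sum(SIZES)                     # 32768 joint states
length = cycle_length((0b1011, 0b10011, 0b100101))
print(length, "of", total_states)
```

Because registers stall as well as step, the joint state map is not a bijection, so trajectories can merge and fall quickly into loops far shorter than the state space, which is the behaviour the talk explains for the alleged A5.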

View original page

22 August Extra seminar: Authentication in distributed systems - principles and pitfalls / Martin Abadi, DEC Systems Research Center

Old Discussion Room, Computer Laboratory

Authentication is one of the bases of security in distributed systems, yet authentication protocols often contain serious flaws. We discuss some principles for the design of authentication protocols. The principles are neither necessary nor sufficient for correctness. They are however helpful, in that adherence to them would have avoided a considerable number of published errors. We also discuss logics designed for the analysis of authentication protocols, and their relation to the informal principles.

View original page

23 June Extra seminar: Securing traceability of ciphertexts - towards a software key escrow system / Yvo Desmedt, University of Wisconsin

Phoenix Seminar Room (Room PO3), Computer Laboratory

The Law Enforcement Agency Field (LEAF), which is sent with the ciphertext in the Clipper system, allows the FBI (police) to trace the sender and receiver of a call. However, the design requires tamperproof hardware. We propose an alternative approach, which is based on the computational complexity of some well known problems in number theory. Its applications extend beyond key escrow.

View original page

16 June Extra seminar: The Rampart toolkit for building high-integrity services / Mike Reiter, Bell Labs.

Room TP4, Computer Laboratory

Rampart is a toolkit of protocols to facilitate the development of "high-integrity" services, i.e., distributed services that retain their availability and correctness despite the malicious penetration of some component servers by an attacker. At the core of Rampart are new protocols that solve several basic problems in distributed computing, including asynchronous group membership, reliable multicast (Byzantine agreement), and atomic multicast. Using these protocols, Rampart supports the development of high-integrity services via the technique of "state machine replication", and also extends this technique with a new approach to server output voting. In this talk we give an overview of Rampart, focusing primarily on its protocol architecture. We also discuss its performance in our prototype implementation, application services that we are developing, and other ongoing work.

View original pageView slides/notes

13 June Securing asynchronous transfer mode / Shaw Chuang, University of Cambridge

Room TP4, Computer Laboratory

Asynchronous transfer mode (ATM) is often described as the technology that will allow total flexibility and efficiency to be achieved in tomorrow's high speed, multi-service, multimedia networks. There has been an enormous amount of research activity in this area. However, security issues for ATM networks have been largely ignored in the past.

ATM networks introduce unique security concerns that must be addressed to ensure confidentiality and integrity of data. This talk will give an outline of the issues in securing ATM networks and report on the ongoing research effort in the area.

PostScript version of slides

View original page

30 May Nonrepudiation protocols / Dieter Gollmann, University of London

Room TP4, Computer Laboratory

For electronic business to mature, electronic transactions have to be made binding for sender and receiver. Digital signatures meet the original goals of non-repudiation quite adequately, but often further requirements are added, which demand the involvement of some trusted third party.

This talk will give an outline of current suggestions for non-repudiation protocols, discuss in more detail one particular protocol which tries to reduce the involvement of the trusted third party, and raise some points regarding the design and verification of such protocols.

View original pageView slides/notes

23 May Factoring for computer scientists / Robert Morris, University of Cambridge and NSA

Room TP4, Computer Laboratory

Thesis I: During the past few decades, there has been an immense amount of research on the factorization of large integers. The size of the largest numbers that can be readily and rapidly factored into primes has increased from about twenty or thirty digits a few decades ago, to perhaps one hundred digits nowadays.

Thesis II: The amount of innovation in the theory and practice of factorization in the past century or so has been disappointingly small. The result is that a competent mathematician of the mid 19th century would perceive modern factorization methods as merely minor modifications to the methods known in his own day. Yet these "minor modifications" are themselves of considerable interest.

Modern research papers in this subject are remarkably difficult to read and understand. The amount of space and time spent on deriving detailed asymptotic estimates of space and running time interferes greatly with understanding the underlying methods.

I propose to discuss factorization methods, both old and new, and in a way that will be accessible to an audience that understands just a tiny amount of number theory.
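One of the 19th-century kernels the talk alludes to is Fermat's difference-of-squares idea, which modern sieve methods refine: find x and y with x² - y² = n, so that n = (x - y)(x + y). A minimal sketch:

```python
# Fermat's factorization method: search upward from ceil(sqrt(n)) for an
# x such that x^2 - n is a perfect square y^2; then n = (x - y)(x + y).

import math

def fermat_factor(n):
    """Return a nontrivial factor pair of an odd composite n."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1                       # start at ceil(sqrt(n))
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:              # x^2 - n is a perfect square
            return (x - y, x + y)
        x += 1

print(fermat_factor(5959))   # (59, 101)
```

The method is fast only when n has two factors close to √n; the quadratic sieve and its relatives keep the x² ≡ y² (mod n) idea but combine many small relations instead of waiting for one lucky square.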

PostScript version of slides

View original pageView slides/notes

16 May Trusted third parties / Mark Lomas, University of Cambridge

Room TP4, Computer Laboratory

What is trust? When people use the term "Trusted Third Party" what exactly do they mean? Often they don't mean what they think they do.

My dictionary gives several definitions, including:

  1. a firm belief in the reliability or truth or strength etc. of a person or thing.
  2. the state of being relied upon.
I suggest that if you accept the first definition you will come to grief. In the context of computer security a better definition might be:
  • something that is capable of violating your security policy.
Systems should be designed such that Trusted Third Parties cannot avoid leaving evidence of misbehaviour.

View original page

2 May Nested signatures / Bruce Christianson, University of Hertfordshire

Room TP4, Computer Laboratory

Public key cryptosystems allow in theory the development of theft-proof capabilities which can be held in user space, passed across untrusted networks, and used without on-line authentication of the presenter, but which cannot be stolen and used successfully by an imposter, even with the collusion of certification authorities.

However, achieving this efficiently makes it desirable to refer to electronic instruments by their signatures rather than including complete texts. We discuss some key-spoofing attacks on theft-proof capabilities constructed using RSA and possible countermeasures. We conclude that PKCs would be more useful if their signatures depended strongly on the public key of the certification authority.

1994

View original page

9 December A markov approach to the design of product ciphers / Luke O'Connor, Queensland University of Technology

Room TP4, Computer Laboratory

Most modern symmetric key ciphers are instances of product ciphers, which were first suggested by Shannon soon after WWII. Such ciphers, which include DES, FEAL, LOKI and IDEA, iterate a fixed round function F to produce the encryption function. This iterative structure suggests that they can be modelled as a Markov chain, whose powers correspond in some manner to the iteration of F.

In this talk we will show that two highly acclaimed attacks, differential and linear cryptanalysis, can be modelled as Markov chains and that most product ciphers will be resistant to these attacks given a sufficient number of rounds.
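The Markov-chain view can be made concrete for differential cryptanalysis: row a of the transition matrix P gives the distribution of output XOR-differences for input difference a, and r rounds correspond to the matrix power P^r. The 3-bit S-box below is an arbitrary example permutation, not taken from any cipher.

```python
# One-round difference-transition matrix of a keyed S-box layer, and its
# matrix powers as a model of multiple rounds.

SBOX = [3, 6, 0, 5, 7, 1, 4, 2]      # an arbitrary 3-bit permutation
N = len(SBOX)

# P[a-1][b-1] = Pr[output difference b | input difference a], a, b != 0.
P = [[0.0] * (N - 1) for _ in range(N - 1)]
for a in range(1, N):
    for x in range(N):
        b = SBOX[x] ^ SBOX[x ^ a]    # nonzero since SBOX is a permutation
        P[a - 1][b - 1] += 1 / N

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# The best r-round differential is the largest entry of P**r; for a
# well-mixing round it shrinks toward the uniform value 1/7.
Pr = P
for _ in range(9):                   # ten rounds in total
    Pr = matmul(Pr, P)
best = max(max(row) for row in Pr)
print(best)
```

Because the S-box is a permutation, P is doubly stochastic, so the uniform distribution over nonzero differences is stationary; "resistance given a sufficient number of rounds" is exactly the statement that the powers of P approach it.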

View original page

5 December Extra seminar: Security in cyberspace: an emerging challenge for society / Richard O. Hundley and Robert H. Anderson, RAND Corporation

Room TP4, Computer Laboratory

Note: usual room contrary to previous announcement

As more and more human activities move into cyberspace, they become exposed to a new set of vulnerabilities, that can be exploited by a wide spectrum of "bad actors" for a variety of motives. This seminar discusses questions such as: (1) How serious are the likely threats to different segments of society, both today and in the future, from cyberspace-based attacks? (2) What are the best strategies for achieving security in cyberspace? (3) What roles and missions should various national entities be assigned? (4) Are there specific services and institutions that play such vital roles in society that their protection from cyberspace-based attacks should be of national concern? This presentation does not answer all these questions, but at least attempts to structure the discussion so that meaningful answers can be obtained.


View original page

29 November Computer generated evidence / Mark Lomas, Cambridge University

Room TP4, Computer Laboratory

Recent activity in the security community has concentrated on computer networks and new services they may provide. This work tends to overlook the more mundane services that we take for granted.

Computer technology has reduced the entry cost for forgers, or, it may be said, has reduced the skill necessary to produce convincing forgeries. To combat this I suggest that paper documents such as banknotes and cheques will need to incorporate machine-readable security information, and many documents used as evidence in courts may have to change drastically in the next few years.

View original page

15 November X/open cryptographic service model / Piers McMahon, ICL

Room TP4, Computer Laboratory

With increased requirements for cryptographic security, there is a growing number of products on the market which provide such services as encryption, digital signature, and key exchange. While it is possible to write applications which use these products, there are no vendor-neutral standards, so any applications which use cryptographic services need to bind to proprietary APIs.

This talk will give an overview of the work of the X/Open Security Working Group in defining a generic cryptographic service API to meet the requirements for application interfaces to cryptographic and key management services. It will show how the X/Open work is building from existing key management models and from extensive implementation experience; and that the agreed service model will be comprehensive, practical, applicable to both software and hardware, algorithm independent, and take account of compliance with export control laws, and controls on cryptographic usage.

View original page

1 November Pretty good privacy / Phil Zimmermann

Hopkinson Lecture Theatre, New Museum Site

Modern technology has made it easier for governments to invade the privacy of their citizens and monitor political opposition groups. But cryptography has started to provide a means of reversing certain aspects of this erosion of privacy, thus affecting the power relationship between governments and citizens.

Philip Zimmermann is the creator of PGP (Pretty Good Privacy), the worldwide de facto standard for the encryption of email. It is published as free software, and has spread like dandelion seeds blowing in the wind, fanned by the firestorm of controversy at government efforts to suppress public access to strong cryptography. This has caused conflict with the US National Security Agency's desire to restrict the use of high-quality encryption, and he is being investigated for possible violation of export controls on munitions.

View original page

19 October Robust computer security (will be held in the babbage lecture theatre) / Ross Anderson, Cambridge University

Babbage Lecture Theatre

The relationship between security and reliability is not straightforward. On the one hand, a secure system does at most X, while a reliable system does at least X; so the two concepts seem in tension. On the other hand, recent experience investigating the failure modes of automatic teller machines, satellite TV encoders, prepayment electricity meters and burglar alarms has shown that almost all real world security failures are in fact reliability failures - they result from blunders in implementation and management. After describing some of this experience, I will discuss a robustness principle which has been derived from it, and which has proved itself useful in guiding security research.

This seminar will be multicast (audio and video) on the mbone as part of our multimedia test programme. Further information is available at http://www.cl.cam.ac.uk/mbone/#cl.

View original page

18 October Implications of an analytical survey of information systems security design methods / Richard Baskerville, Binghamton University

Room TP4, Computer Laboratory

A recent survey of three generations of general information system design methods provides a framework for understanding current security design practice. The methods used may depend on checklists of controls, divide functional requirements into engineering partitions, or create abstract models of both the problem and the solution. An analysis of this survey reveals that security methods lag behind general systems development methods, and that many general methods fail to consider security specifications rigorously. These findings suggest that more general software engineering techniques cannot succeed without explicit security considerations.

View original page

8 June Factoring rsa-129 / Paul Leyland, University of Oxford

Hopkinson Lecture Theatre, New Museums Site, Pembroke Street, Cambridge

In August 1977, Scientific American published a description of the newly-invented RSA public key cryptosystem. The inventors, Rivest, Shamir and Adleman, offered a $100 prize to the first person or group to break an implementation by factoring a 129-digit integer.

In this talk, I will describe how RSA-129 was factored by a collaboration of hundreds of workers spread around the world. I will concentrate mostly on the resource-management and organizational problems (rather than the number theory) behind what is probably the largest single computation ever performed.

View original page

1 June Factoring rsa-129 (postponed until next week) / Paul Leyland, University of Oxford

Hopkinson Lecture Theatre, New Museums Site, Pembroke Street, Cambridge


View original page

24 May Integrating security in inter-domain routing protocols / John Crowcroft, University of London

Room TP4, Computer Laboratory (should also be multicast live over SuperJANET)

Network routing protocols operate in a vulnerable environment. Unless protected by appropriate security measures, their operation can easily be subverted by intruders capable of modifying, deleting or adding false information in routing updates. This talk analyses threats to the secure operation of inter-domain routing protocols, and proposes various countermeasures to make these protocols secure against external threats.

View original page

17 May A test suite for random number generators / Jonathan Hart, University of Cambridge

Room TP4, Computer Laboratory

Many applications, such as key generation in cryptography, rely on sources of unpredictable behaviour, which typically take the form of a random or pseudorandom number generator. It is of importance to designers and users to be able to evaluate the effectiveness of these devices.

The talk will cover the evaluation techniques implemented by a software suite we have written. A variety of statistical tests will be discussed, together with more specific methods such as linear complexity and the spectral test. Other tests, including sequence complexity and the binary derivative, will be mentioned in connection with the commercially available Crypt-XS package.

Some theoretical background will also be covered, including Yao's theorem which provides justification for a statistical approach, and the work of various authors on linear complexity.

View original page

10 May Wiretapping, forgery and plausible deniability / Mike Roe, University of Cambridge

Room TP4, Computer Laboratory

The purpose of any security service is either to ensure that an event happens or to prevent an event happening (liveness or safety). Software reliability is typically concerned with events that are universally agreed to be beneficial or harmful. On the other hand, computer security is typically concerned with events that are beneficial to some persons while harming others.

It follows that whether a computer security service is desirable or not depends upon who you are, and how you are affected by the events that it causes or prevents.

Traditionally, research interest has been focused on the services known as confidentiality, integrity and non-repudiation, and has neglected the converse services of wiretapping, forgery and plausible deniability.

Recent proposals for national cryptographic infrastructures are attempting to redress this historical imbalance. We will describe some possible protocols for achieving these new services, both with and without the use of trusted third parties.

View original page

3 May Key management / Fred Piper, University of London

Room TP4, Computer Laboratory

Key management is undoubtedly one of the most important aspects of any cryptographic system. The skill of the designers who produce algorithms to withstand sophisticated cryptanalytic attacks is completely wasted if keys can be obtained by much simpler means such as seeing them displayed on a screen.

In this seminar we will present a low-level discussion on some of the basic aspects of key management; generation, distribution, storage, change and destruction. The discussion will encompass both symmetric and asymmetric systems.

For a symmetric system all keys must be secret and the distribution of those keys, particularly during initialisation, is a major headache. The introduction of asymmetric systems removed the requirement that all keys must be secret and thus changed the nature of the key distribution problem. However, for asymmetric systems public keys must be authentic and must have other specific properties. These requirements create new problems.

Generic key hierarchies will be discussed and, possibly, some schemes designed to solve specific problems, e.g. the transaction key system for EFTPOS. The relevant standards will also be mentioned.

26 April Extending the ban logic to secrecy / Ian Jackson, University of Cambridge

View original page

20 April A new technique for biometric recognition / John Daugman, University of Cambridge

Babbage Lecture Theatre, New Museums Site

Samples from stochastic signals with sufficient complexity need reveal only very little agreement in order to reject the hypothesis that they arise from independent sources. The failure of a statistical test of independence can thereby serve as a basis for recognising signal sources if they possess enough degrees of freedom. Combinatorial complexity of stochastic detail can lead to similarity metrics having binomial type distributions, and this allows decisions about the identity of signal sources to be made with astronomic confidence levels.

I will describe an application of these statistical pattern recognition principles in a system for biometric personal identification that analyses the random texture visible at some distance in the iris of a person's eye. There is little genetic penetrance in the phenotypic description of the iris, beyond colour, form and physiology. Since its detailed morphogenesis depends on the initial conditions in the embryonic mesoderm from which it develops, the iris texture itself is stochastic, if not chaotic. The recognition algorithm demodulates the iris texture with complex valued 2D Gabor wavelets, and coarsely quantises the resulting phasors to build a 256 byte `iris code' whose entropy is roughly 173 bits. Ergodicity and commensurability facilitate extremely rapid comparisons of entire iris codes using 32-bit XOR instructions. Recognition decisions are made by exhaustive database searches at the rate of about 10,000 persons per second.
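The comparison step described above can be sketched in a few lines. The sketch omits the masking and rotation handling a real matcher needs, uses random bytes as stand-ins for iris codes, and the 0.32 decision threshold is an illustrative choice, not a figure from the talk.

```python
# Iris-code matching via XOR: the normalised Hamming distance between
# two 256-byte codes. Statistically independent codes land near 0.5;
# two codes from the same eye land far below it.

import os

def hamming_distance(code1, code2):
    """Fraction of disagreeing bits between two equal-length codes."""
    assert len(code1) == len(code2)
    diff = 0
    for b1, b2 in zip(code1, code2):
        diff += bin(b1 ^ b2).count('1')     # the XOR comparison step
    return diff / (8 * len(code1))

def same_eye(code1, code2, threshold=0.32):
    return hamming_distance(code1, code2) < threshold

a = os.urandom(256)                  # stand-in for a real 256-byte code
b = bytearray(a)
b[0] ^= 0xFF                         # same eye, 8 bits of noise
print(hamming_distance(a, a))        # 0.0
print(same_eye(a, bytes(b)))         # True: distance = 8/2048
```

The "astronomic confidence levels" come from the binomial statistics: with roughly 173 independent bits, a chance agreement well below the 0.5 mean is vanishingly improbable, so a low distance rejects the independence hypothesis decisively.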

1 March Clock controlled sequence generators / Bill Chambers, King's College, London

View original page

22 February Another attack on des / Donald Davies

Room TP4, Computer Laboratory

The expansion permutation in DES duplicates two bits between each neighbouring pair of S-boxes. Before they enter the S-boxes, key bits are added to them (bitwise mod 2). The difference between plain and cipher is a sum of 8 S-box outputs and can reveal key information.

This attack can give 16 bits of key information but it takes a lot of samples for a reliable result. There could just possibly be applications where it mattered.
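The duplication the attack exploits is visible in DES's standard expansion table E, which stretches the 32-bit half-block to 48 bits by repeating the edge bits of each 4-bit group for the neighbouring S-box:

```python
# The standard DES E table: the input bit fed to each of the 48 output
# positions, generated from its regular structure. Each 6-bit S-box
# input borrows one bit from each neighbouring 4-bit group.

from collections import Counter

E = [(4 * i + j - 1) % 32 + 1 for i in range(8) for j in range(6)]

counts = Counter(E)
duplicated = sorted(bit for bit, c in counts.items() if c == 2)
print(len(E), len(duplicated))   # 48 output positions, 16 bits used twice
```

Those 16 twice-used bits are exactly the shared positions between neighbouring S-boxes that the attack correlates across to recover key information.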

View original page

15 February Robustness in protocols and algorithms / Ross Anderson, University of Cambridge Computer Laboratory

Room TP4, Computer Laboratory

The ease with which design mistakes are made in computer security systems in general, and in cryptography in particular, lead us to ask whether it is possible to design systems whose security properties are robust, in the sense that they can cope with minor errors of design, implementation and operation.

However, when we look at other engineering disciplines, we see that the nature of robustness properties varies quite widely. Most civil engineering mistakes cause structures to be slightly weaker than planned, and so bridges are built to be several times stronger than they need to be; aircraft designers, on the other hand, duplicate critical components such as engines, instruments and pilots. We will argue that there is a comparable organising principle for computer and communications security systems.

8 February Database security / Simon Wiseman, Defence Research Agency, Malvern

1 February Detecting denial of service attacks / Roger Needham, University of Cambridge

Room TP4, Computer Laboratory

Denial of Service is a Cinderella subject in security, since it is often supposed that there is not a lot that can usefully be said about it. There is very little literature in comparison with the huge amount published on confidentiality and authenticity. Some recent consulting work shows that there are things that can be said, and I shall present some of them using a suitably sanitised example.

18 January A new attack on algebraic coded cryptosystems / Keith Gibson, Birkbeck College, London

12 January How to steal a car / John Gordon, University of Hertfordshire and Concept Labs

Babbage Lecture Theatre, New Museums Site, Pembroke Street

Cars are stolen electronically. Widespread adoption of remote locking devices - electronic key fobs - has given rise to a new type of car theft. These devices send electronic signals which can be recorded and replayed using a so-called grabber, and this received considerable press attention following a recent court case. The seminar will describe the current state of affairs and how cryptographic techniques are leading to more theft-proof vehicles.
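One cryptographic countermeasure to grabber replay is a rolling code: each key-fob press authenticates a fresh counter value, so a recorded transmission is useless a second time. The sketch below uses an HMAC over a counter purely for illustration; production systems of the era used proprietary designs (and tolerance windows for missed presses), which this deliberately omits.

```python
import hmac
import hashlib

# Illustrative shared secret between fob and car (hypothetical value).
KEY = b"shared-secret-between-fob-and-car"

def fob_press(counter: int) -> tuple[int, bytes]:
    """Each press transmits the counter plus a MAC over it."""
    code = hmac.new(KEY, counter.to_bytes(8, "big"), hashlib.sha256).digest()[:8]
    return counter, code

class Car:
    def __init__(self) -> None:
        self.last_counter = -1   # highest counter accepted so far

    def unlock(self, counter: int, code: bytes) -> bool:
        expected = hmac.new(KEY, counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()[:8]
        # Accept only strictly newer counters with a valid MAC.
        if counter > self.last_counter and hmac.compare_digest(code, expected):
            self.last_counter = counter
            return True
        return False

car = Car()
press = fob_press(1)
assert car.unlock(*press)        # fresh code accepted
assert not car.unlock(*press)    # replayed ("grabbed") code rejected
```

A fixed-code fob fails exactly where this succeeds: with no counter, the recorded signal verifies every time.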

1993

25 February Computer security standards / Mike Roe, Computer Laboratory

18 February Open system security / John Bull, ANSA

Room TP4, Computer Laboratory

Distributed computer networks of unlimited extensibility and scale will evolve over the next decade. On behalf of their users, a huge variety of computer systems will offer, request and exchange services in an immense international open trading enterprise where there can be no central authority and no ubiquitous security infrastructure. This seminar will present a view that to meet the challenge we must take a radically different approach to computer security. It will argue for a change of emphasis, away from enforcement of administrator-imposed security policies through an infrastructure, towards a regime of self-defence by individual service providers. It will discuss the policy nuances, required mechanisms and protocol design consequences that would follow from this change of direction.

11 February Combinatorial authentication / Ross Anderson, Computer Laboratory

Room TP4, Computer Laboratory

A number of digital signature schemes have been proposed (Fiat-Shamir, Micali-Shamir and Bos-Chaum) which work by using a hash function of the message to key a combinatorial subset product. We find that such schemes need to incorporate a certain amount of freshness if they are to be secure, and we explain and quantify this.

When we consider the properties that a hash function must possess in order to be useful in this kind of application, we find that, contrary to previous belief, collision freedom is not a sufficient condition for hash functions. In fact, given any collision free hash function, we construct a derived function which is also collision free but cryptographically useless. In the process, we settle an outstanding conjecture of Okamoto that correlation freedom is a strictly stronger property than collision freedom.
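A toy version of the subset-product idea may help: the hash of the message selects a subset of secret values whose product forms the signature, verified against the corresponding public squares. The parameters below are illustrative and insecure, and the sketch deliberately has no fresh randomness per signature, which is precisely the kind of weakness the first paragraph alludes to.

```python
import hashlib
import secrets

K = 16                               # number of key components (toy size)
p, q = 2 ** 127 - 1, 2 ** 89 - 1     # toy Mersenne primes; n = p * q
n = p * q

secret = [secrets.randbelow(n - 2) + 2 for _ in range(K)]
public = [pow(s, 2, n) for s in secret]   # published squares

def subset(msg: bytes) -> list[int]:
    """The hash of the message keys the combinatorial subset."""
    h = hashlib.sha256(msg).digest()
    return [i for i in range(K) if (h[i // 8] >> (i % 8)) & 1]

def sign(msg: bytes) -> int:
    sig = 1
    for i in subset(msg):
        sig = sig * secret[i] % n
    return sig

def verify(msg: bytes, sig: int) -> bool:
    prod = 1
    for i in subset(msg):
        prod = prod * public[i] % n
    return pow(sig, 2, n) == prod     # sig^2 = product of selected squares

sig = sign(b"hello")
assert verify(b"hello", sig)
```

Note the determinism: the same message always yields the same signature, so any structural weakness in how the hash selects subsets is exploitable across messages, hence the need for freshness.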

28 January Defining confidentiality by refinement / Jeremy Jacob, St Peter's College, Oxford

Room TP4, Computer Laboratory

The purpose of this talk is to give a formal definition of the term "Confidentiality Property". On the way, formal definitions will be given of related terms such as "Functionality property", "Cheapness property" and "Prestige property" (the last two being pedagogic toys).

The definitions of these terms are given in terms of a "refinement relation". Refinement relations are of interest as they capture the proof obligations for showing program correctness, and so our definitions are directly related to correctness concerns. The space of refinement relations is modelled as a set of pre-orders (quasi-orders).

21 January Complexity questions in cryptography / Dominic Welsh, Merton College, Oxford

Room TP4, Computer Laboratory

This talk will be a survey of some of the advances made recently on the frontier between complexity and cryptography. In particular, it will discuss the role of uniqueness and the importance of randomness in this area.

It will be self-contained and assume only a basic knowledge of complexity concepts, so should be accessible to nonspecialists as well as of interest to experts.

14 January Threshold cryptosystems / Yvo Desmedt, University of Wisconsin at Milwaukee

Room TP4, Computer Laboratory

Often the power to use a cryptosystem has to be shared. In a threshold scheme, any k out of l shareholders have the power to generate a secret key (while fewer than k do not). However, threshold schemes cannot be used directly in many applications, such as threshold signatures, in which k out of l shareholders must co-sign a message: a normal threshold scheme would require the shareholders to send their shares to a trusted person who would sign for them, and the use of such a trusted person violates the main point of threshold signatures!

The first concepts of threshold cryptography were independently introduced by Boyd, Croft-Harris and Desmedt; and schemes for threshold decryption, threshold authentication and threshold signature have been presented recently. At Crypto '92, Micali argued that the use of verifiable threshold schemes would facilitate the enforcement of court ordered wiretapping.

We first overview the research in the field and then present a threshold signature scheme which is as secure as RSA. This has the property that a court does not need to order the disclosure of a master key, but only the decryption of individual messages.
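The flavour of RSA-based threshold signing can be conveyed with a minimal 2-of-2 additive sketch: the private exponent is split so that each shareholder computes a partial signature, and the partials combine by multiplication without anyone reconstructing d. This is not the scheme from the talk, and the textbook-sized parameters are wildly insecure.

```python
import secrets

# Toy RSA parameters (insecure, for illustration only).
p, q = 61, 53
n = p * q                 # 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17
d = pow(e, -1, phi)       # private exponent

# Additive 2-of-2 split of d: neither share reveals d on its own.
d1 = secrets.randbelow(phi)
d2 = (d - d1) % phi

m = 65                    # message representative, m < n, gcd(m, n) = 1
partial1 = pow(m, d1, n)  # shareholder 1 signs with its share
partial2 = pow(m, d2, n)  # shareholder 2 signs with its share

# Combining partials: m^d1 * m^d2 = m^(d1+d2) = m^d (mod n).
signature = partial1 * partial2 % n

assert signature == pow(m, d, n)
assert pow(signature, e, n) == m   # ordinary RSA verification still works
```

The key point carried over from the abstract is that the combiner only multiplies public values; no trusted party ever holds the whole exponent.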

1992

3 December Polymorphic viruses and means to describe them / Dr Jan Hruska, Sophos Ltd.

Room TP4, Computer Laboratory

Recent developments in computer virus writing have caused a major rethink of the strategies used by anti-virus software to detect virus code. Apart from the constantly increasing storage requirement for information describing each virus, the increasing number of polymorphic (encrypting, self-mutating) viruses has led to the development of algorithmic languages which describe virus code.

The lecture will include live demonstrations of computer viruses.

12 November Password security in distributed systems / Dr Mark Lomas, University of Cambridge Computer Laboratory

Room TP4, Computer Laboratory

The `Internet Worm' exploited poorly chosen passwords to gain access to a very large number of computers; the UNIX password system is known to be weak against guessing attacks. It is less well known that many, if not most, authentication protocols are also subject to similar guessing attacks.

Several years ago a group of us (Li Gong, Jerry Saltzer, Roger Needham, and myself) proposed a technical solution to this problem. Our solution has been adopted by some, but not all, designers of cryptographic protocols.

I intend to demonstrate how one might break the schemes that did not adopt our suggestions. In particular I shall show how to break `C2 secure' SunOS, NFS, and Kerberos. I'll also show how these schemes may be changed to protect against such attacks.
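The guessing attack behind all three breaks has a common shape: if a protocol message is a deterministic function of a low-entropy password and public data, one recorded exchange lets an eavesdropper verify candidate passwords offline. A minimal sketch, with an invented toy protocol (hashing the password with a visible challenge), not any of the actual systems named above:

```python
import hashlib

def respond(password: str, challenge: bytes) -> bytes:
    """A (deliberately weak) response: H(password || challenge)."""
    return hashlib.sha256(password.encode() + challenge).digest()

# The eavesdropper records one legitimate exchange off the wire.
challenge = b"nonce-1234"
recorded = respond("letmein", challenge)

# Offline dictionary attack: test each guess against the recording.
dictionary = ["password", "123456", "qwerty", "letmein", "trustno1"]
guessed = next((w for w in dictionary if respond(w, challenge) == recorded),
               None)
assert guessed == "letmein"
```

The fix we proposed works by denying the attacker this verifiable redundancy, so that a wrong guess cannot be recognised without interacting with the server.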

3 November Security of TCP/IP / Prof. James Davenport, University of Bath

Discussion Room, Computer Laboratory

The phrase "TCP/IP" is used to cover a multitude of independent protocols and mechanisms, some of which are Internet standards and others of which are vendor-specific or just "happen to be there", and which were generally designed with functionality taking precedence over security. We will examine the various sub-families, their evolution and background assumptions, and hence deduce the security assumptions which implicitly underlie them, and the weaknesses from which they suffer.

The speaker has been a consultant on TCP/IP for the Janet system, and has found and blocked several loopholes in the TCP/IP suite.

27 October Authentication standards / Chris Mitchell

Discussion Room, Computer Laboratory

Authentication protocols have been the subject of academic interest for some 15 years, following the seminal paper of Needham and Schroeder. While such protocols have been widely discussed and implemented, and indeed international standards for these protocols have been, and are being, developed, the explicit objectives of an authentication protocol have rarely been subjected to critical examination. Even those formal logics devised to examine these protocols often partially dodge the issue of the objectives of an authentication protocol, except typically to deal with the establishment of shared secret keys.

In this seminar, the latest ISO draft standards covering authentication protocols are considered in the context of a discussion of the objectives of these protocols. This discussion provides useful insights into the applicability of protocols for particular applications.

22 October Proving the security of financial systems / Ross Anderson

14 October Computer crime / Alistair Kelman QC
