Department of Computer Science and Technology

Security Group

2016 seminars

If you can't find a talk you are looking for on this page, try the old archives.

29 November 14:00 Reversing chip design for hardware security characterization / Franck Courbon, Security group, University of Cambridge

LT2, Computer Laboratory, William Gates Building

In an increasingly connected world, attacks target payment, identification and transportation systems, with wide-ranging economic, social and societal impact. Some of these attacks are aimed directly at the hardware structure of embedded systems: they may exploit transistor behavior, or aim to modify part of an integrated circuit. This talk addresses the problems behind Hardware Trojan detection, fault attacks and memory cell content extraction. Our methodology has three steps: sample preparation, Scanning Electron Microscopy (SEM) imaging, and image processing. We show how well suited SEM's intrinsic features are to hardware security. On one hand, after a frontside preparation down to the transistors' active region, the methodology allows detecting malicious hardware modifications, extracting the contents of one type of ROM, and locating individual transistors in a chip's synthesized logic prior to a fault attack. On the other hand, after a backside preparation down to the transistors' tunnel oxides, it allows retrieving Flash/EEPROM memory contents. The methodology is illustrated with practical experiments, and we will particularly point out the cost, speed and efficiency advantages of such SEM-based approaches.
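
As a toy illustration of the image-processing step (not code from the talk), reading a mask ROM's contents from a registered SEM image can be as simple as thresholding the mean brightness of each cell region. The grid geometry, grey levels and threshold below are invented; real images first need registration and denoising.

```python
# Illustrative sketch: recover a bit matrix from an SEM image by
# thresholding the mean brightness of each memory cell region.

def extract_rom_bits(image, rows, cols, threshold=128):
    """image: 2D list of grey levels (0-255); returns a rows x cols bit matrix."""
    h, w = len(image), len(image[0])
    ch, cw = h // rows, w // cols              # pixel size of one cell region
    bits = []
    for r in range(rows):
        row_bits = []
        for c in range(cols):
            region = [image[y][x]
                      for y in range(r * ch, (r + 1) * ch)
                      for x in range(c * cw, (c + 1) * cw)]
            mean = sum(region) / len(region)
            row_bits.append(1 if mean > threshold else 0)   # bright cell -> 1
        bits.append(row_bits)
    return bits
```

On a synthetic 4x4 image with two bright cells on the diagonal, this yields the identity-like pattern [[1, 0], [0, 1]].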

Franck Courbon is a Post-Doctoral Research Associate in the Computer Laboratory security team. He is currently working on integrated circuit memory content extraction. He previously worked for Gemalto, where he was awarded a PhD in partnership with the École des Mines de Saint-Étienne in France.

04 November 13:15 Drammer: Deterministic Rowhammer Attacks on Mobile Platforms / Kaveh Razavi, Vrije Universiteit Amsterdam

Room SS03, Computer Laboratory, William Gates Building

Recent work shows that the Rowhammer hardware bug can be used to craft powerful attacks and completely subvert a system. However, existing efforts either describe probabilistic (and thus unreliable) attacks or rely on special (and often unavailable) memory management features to place victim objects in vulnerable physical memory locations. Moreover, prior work only targets x86, and researchers have openly wondered whether Rowhammer attacks on other architectures, such as ARM, are even possible.

We show that deterministic Rowhammer attacks are feasible on commodity mobile platforms and that they cannot be mitigated by current defenses. Rather than assuming special memory management features, our attack, Drammer, solely relies on the predictable memory reuse patterns of standard physical memory allocators. We implement Drammer on Android/ARM, demonstrating the practicability of our attack, but also discuss a generalization of our approach to other Linux-based platforms. Furthermore, we show that traditional x86-based Rowhammer exploitation techniques no longer work on mobile platforms and address the resulting challenges towards practical mobile Rowhammer attacks.
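
The attack's two phases, templating memory for rows whose bits flip under hammering and then steering the allocator so a victim object lands in one of them, can be caricatured in a few lines. The DRAM model and allocator below are hypothetical stand-ins for illustration, not Drammer's implementation.

```python
# Toy model of the two phases of a deterministic Rowhammer attack:
# (1) templating: find rows with weak cells; (2) placement: reuse the
# allocator's predictable behavior to land a victim in a vulnerable row.

class ToyDram:
    def __init__(self, flippy_rows):
        self.flippy_rows = set(flippy_rows)    # rows containing weak cells

    def hammer(self, row):
        # hammering row r disturbs its physical neighbors r-1 and r+1
        return [n for n in (row - 1, row + 1) if n in self.flippy_rows]

def template(dram, n_rows):
    """Phase 1: map every row that flips when a neighbor is hammered."""
    vulnerable = set()
    for row in range(n_rows):
        vulnerable.update(dram.hammer(row))
    return vulnerable

def exploit(dram, n_rows, allocate):
    """Phase 2: allocate until the (deterministic) allocator hands us a
    vulnerable row; the victim object is then placed there."""
    vulnerable = template(dram, n_rows)
    for _ in range(n_rows):
        row = allocate()
        if row in vulnerable:
            return row                         # deterministic placement achieved
    return None
```

With a sequential allocator handing out rows 0, 1, 2, ... the exploit lands on the first templated row every time, which is the sense in which the attack is deterministic rather than probabilistic.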

To support our claims, we present the first Rowhammer-based Android root exploit relying on no software vulnerability, and requiring no user permissions. In addition, we present an analysis of several popular smartphones and find that many of them are susceptible to our Drammer attack. We conclude by discussing potential mitigation strategies and urging our community to address the concrete threat of faulty DRAM chips in widespread commodity platforms.

Kaveh Razavi is a security researcher at the Vrije Universiteit Amsterdam in the Netherlands. He is currently mostly interested in reliable exploitation and mitigation of hardware vulnerabilities and side-channel attacks on OS/hardware interfaces. He has previously been part of a CERT team specializing in operating system security, has worked on the authentication systems of a Swiss bank, and has spent two summers in Microsoft Research building large-scale system prototypes. He holds a BSc from Sharif University of Technology, Tehran, an MSc from ETH Zurich, and a PhD from Vrije Universiteit Amsterdam.

26 October 16:15 End-to-end encryption: Behind the scenes / D Vasile, M Kleppmann & D Thomas - University of Cambridge Computer Laboratory

Lecture Theatre 1, Computer Laboratory

Everyone is talking about "cloud computing", a marketing term for "renting time on someone else's computers on the internet". While the cloud is great from an efficiency point of view, it is a potential security nightmare: applications have to blindly trust cloud providers that they will preserve the integrity of the data and prevent unauthorised access. Data breaches and compromises of cloud providers are a serious risk.

End-to-end encryption allows us to avoid having to blindly trust the servers. An early example is PGP/GnuPG encrypted email, which never went mainstream, but more recent secure messaging apps like WhatsApp, Signal and iMessage have shown that it is feasible for millions of people to use end-to-end encryption without being security experts.

How do these protocols actually work? In this talk, we will give a friendly introduction to secure messaging protocols — to understand the threats against which they defend, and how cryptographic operations are combined to implement those defences in the protocol. If you have ever wondered what "forward secrecy" means, how key exchange works, or how protocols can ensure you're communicating with the right person (not an impostor like a "man in the middle"), this talk will clear things up.
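
To give a flavour of the key-exchange step such protocols build on, here is textbook finite-field Diffie-Hellman. The tiny group parameters are for demonstration only; real messaging protocols use large standardized groups or elliptic curves, plus authentication to stop a man in the middle, and key ratcheting for forward secrecy.

```python
# Textbook Diffie-Hellman key agreement. WARNING: toy parameters, no
# authentication -- a sketch of the mathematics, not a secure protocol.
import secrets

P, G = 23, 5                               # public parameters: prime modulus, generator

def dh_keypair(p=P, g=G):
    priv = secrets.randbelow(p - 2) + 1    # secret exponent in [1, p-2]
    return priv, pow(g, priv, p)           # (private, public = g^priv mod p)

def dh_shared(their_pub, my_priv, p=P):
    return pow(their_pub, my_priv, p)      # both parties derive g^(ab) mod p

# Alice and Bob each publish only their public value...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
# ...yet compute the same shared secret, which an eavesdropper who saw
# only a_pub and b_pub cannot feasibly recover (for large parameters).
assert dh_shared(b_pub, a_priv) == dh_shared(a_pub, b_priv)
```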

We will give a dramatic live performance of security protocols, guaranteed to make a dry subject interesting!

21 October 14:00 Putting wireless signal security in a system security context / Wade Trappe, Rutgers University

LT2, Computer Laboratory, William Gates Building

The tetherless nature of wireless communications supports many applications and “ecosystems”, ranging from the day-to-day operation of a hospital to factory automation. Unfortunately, many signals in the wireless ether today have no security baked in, and their security and privacy implications are dire. In many cases these wireless technologies are not even considered by an organization's security administrators. We will examine several examples of prevalent wireless signals that leak sensitive information. Often, once a security analyst is aware of these problems the solutions are simple, while in other cases they require new techniques that complement higher-layer “cryptographic” tools; in short, techniques to secure the signals themselves. Towards this objective, there has recently been interest in applying the principles of information theory and signal processing to develop a suite of physical layer security mechanisms. Although the community has made progress on the theory of securing the physical layer, many important issues must be addressed if physical layer security is ever to be adopted by real, practical systems. In this talk we briefly review several different flavors of physical layer security, then examine the major hurdles that must be cleared if physical layer security is to be adopted in practice. We will identify some philosophical questions related to how one places physical layer security in the context of a system's security, and outline opportunities for applying it to real systems, especially if we can overcome the challenges we've outlined.
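
The central quantity in information-theoretic physical layer security is Wyner's secrecy capacity: for a Gaussian wiretap channel it is the gap between the legitimate receiver's and the eavesdropper's channel capacities, floored at zero. A worked example (our notation, not the speaker's):

```python
# Secrecy capacity of a degraded Gaussian wiretap channel:
# C_s = max(0, log2(1 + SNR_bob) - log2(1 + SNR_eve)), in bits/channel use.
from math import log2

def awgn_capacity(snr):
    return log2(1 + snr)                   # Shannon capacity, SNR in linear scale

def secrecy_capacity(snr_bob, snr_eve):
    return max(0.0, awgn_capacity(snr_bob) - awgn_capacity(snr_eve))
```

For instance, with Bob at SNR 15 and Eve at SNR 3, the secrecy capacity is log2(16) - log2(4) = 2 bits per channel use; if Eve's channel is at least as good as Bob's, no secret bits can be conveyed at all.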

18 October 14:00 The Million-Key Question: Investigating the Origins of RSA Public Keys (Best Paper Award @ USENIX Security 2016) / Petr Svenda, Masaryk University, Brno, Czech Republic

LT2, Computer Laboratory, William Gates Building

Can bits of an RSA public key leak information about design and implementation choices such as the prime generation algorithm? We analysed over 60 million freshly generated key pairs from 22 open- and closed-source libraries and from 16 different smartcards, revealing significant leakage. The bias introduced by different choices is sufficiently large to classify a probable library or smartcard with high accuracy based only on the values of public keys. Such a classification can be used to decrease the anonymity set of users of anonymous mailers or operators of linked Tor hidden services, to quickly detect keys from the same vulnerable library, or to verify a claim of use of secure hardware by a remote party. The classification of the key origins of more than 10 million RSA-based IPv4 TLS keys and 1.4 million PGP keys also provides an independent estimate of the libraries most commonly used to generate the keys found on the Internet. Our broad inspection also provides both a sanity check and deep insight regarding which of the recommendations for RSA key pair generation are followed in practice, including by closed-source libraries and smartcards.
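
To make the idea concrete, here is the kind of feature one can read directly off a bare public modulus N = p*q: its most significant byte (shaped by how the library bounds its primes), its residue modulo small numbers (some implementations avoid certain residues), and low-order bits. The specific features and names are our illustration, not the paper's exact feature set.

```python
# Sketch: features computable from an RSA public modulus alone, of the
# kind a key-origin classifier could use.

def modulus_features(n):
    n_bytes = (n.bit_length() + 7) // 8
    return {
        "bit_length": n.bit_length(),
        "ms_byte": n >> (8 * (n_bytes - 1)),   # top byte of N, shaped by prime bounds
        "mod_3": n % 3,                        # residue biases differ per library
        "second_lsb": (n >> 1) & 1,            # bit just above the (always-set) LSB
    }
```

A classifier trained on such feature vectors from keys of known provenance can then score an unknown public key against each candidate library or card.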

The talk is based on our USENIX Security 2016 paper and will also provide fresh details from the continuing analysis of further libraries and smartcards performed after the conference itself.

11 October 14:00 Semantics derived automatically from language corpora necessarily contain human biases / Arvind Narayanan, Princeton University

LT2, Computer Laboratory, William Gates Building

Joint work with Aylin Caliskan-Islam and Joanna J. Bryson

Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language---the same sort of language humans are exposed to every day. We replicate a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known psychological studies. We replicate these using a widely used, purely statistical machine-learning model---namely, the GloVe word embedding---trained on a corpus of text from the Web. Our results indicate that language itself contains recoverable and accurate imprints of our historic biases, whether these are morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. These regularities are captured by machine learning along with the rest of semantics. In addition to our empirical findings concerning language, we also contribute new methods for evaluating bias in text, the Word Embedding Association Test (WEAT) and the Word Embedding Factual Association Test (WEFAT). Our results have implications not only for AI and machine learning, but also for the fields of psychology, sociology, and human ethics, since they raise the possibility that mere exposure to everyday language can account for the biases we replicate here.
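
A minimal version of the WEAT statistic on made-up two-dimensional "embeddings" looks as follows; the real test uses high-dimensional GloVe vectors and a permutation test for statistical significance.

```python
# Tiny Word Embedding Association Test (WEAT): compare how much more each
# target word associates (by cosine similarity) with attribute set A than B.
from math import sqrt

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def assoc(w, A, B, emb):
    # s(w, A, B): mean similarity to A minus mean similarity to B
    return (sum(cos(emb[w], emb[a]) for a in A) / len(A)
            - sum(cos(emb[w], emb[b]) for b in B) / len(B))

def weat(X, Y, A, B, emb):
    # positive => targets X lean towards attributes A and Y towards B
    return sum(assoc(x, A, B, emb) for x in X) - sum(assoc(y, A, B, emb) for y in Y)
```

With toy vectors in which "flower" points towards "pleasant" and "insect" towards "unpleasant", the statistic comes out positive, mirroring the morally neutral flower/insect bias the paper replicates.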

Arvind Narayanan is an Assistant Professor of Computer Science at Princeton. He leads the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. Narayanan also leads a research team investigating the security, anonymity, and stability of cryptocurrencies as well as novel applications of blockchains. He co-created a Massive Open Online Course as well as a textbook on Bitcoin and cryptocurrency technologies. His doctoral research showed the fundamental limits of de-identification, for which he received the Privacy Enhancing Technologies Award.

Narayanan is an affiliated faculty member at the Center for Information Technology Policy at Princeton and an affiliate scholar at Stanford Law School's Center for Internet and Society.

02 August 14:45 Design for Security Test against Fault Injection Attack, and Fast Test with Compressive Sensing / Prof Huiyun Li, Shenzhen Institutes of Advanced Technology

Lecture Theatre 1, Computer Laboratory

Abstract not available

21 June 14:00 Cyberinsurance: good for your company, bad for your country? / Fabio Massacci - University of Trento

Room FW26, Computer Laboratory, William Gates Building

'Cyberinsurance' is a broad industry term for corporate liability insurance covering damages due to security breaches of the corporate IT infrastructure. It is a booming market that raises significant expectations: both policy makers (e.g. the UK Paymaster General and the US Senate Committee on Security) and cyber experts (e.g. Bruce Schneier) have heralded it as a mechanism for efficiently valuing the cost of cyber attacks and as an effective substitute for government action. Whilst the effect of purchasing insurance on the behavior of individuals or firms has been studied for more than four decades, the unique, adaptive characteristics of cyber attacks make past findings not necessarily applicable.

In this talk I will illustrate a general economic model of heterogeneous firms making risk-averse decisions while facing losses from cyber attacks conducted by strategic adversaries in a Cournot competition. We demonstrate that whilst the presence of actuarially fair insurance increases the aggregate utility of target firms, it does *not* necessarily increase security expenditures with respect to those mandated by a benevolent social planner. Furthermore, we show that when insurance is provided by a monopolist insurer that mandates firms' security expenditure (as has been proposed), aggregate security expenditure is predicted to fall dramatically (and the number of attackers to increase). In other words, delegating to cyberinsurers the policy maker's role of regulating security expenditures might yield a digital tragedy of the commons.
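
As a hedged, single-firm illustration of why fair insurance need not raise security spending (an expected-utility toy of our own, not the Cournot model of the talk): an uninsured risk-averse firm overweights the loss state and spends more on protection, while a firm paying an actuarially fair premium bears no residual risk and optimizes expected loss alone.

```python
# Toy model: a log-utility firm with wealth W chooses security spending x.
# An attack succeeds with probability p(x) = P0/(1+x) and costs L.
from math import log

W, L, P0 = 100.0, 60.0, 0.5                # wealth, loss size, base attack probability

def p(x):
    return P0 / (1.0 + x)                  # spending lowers the attack probability

def eu_uninsured(x):
    # expected log utility over the two outcomes (attack / no attack)
    return (1 - p(x)) * log(W - x) + p(x) * log(W - x - L)

def eu_insured(x):
    # actuarially fair premium p(x)*L; no residual risk remains
    return log(W - x - p(x) * L)

grid = [i / 10 for i in range(300)]        # candidate spending levels 0.0 .. 29.9
x_uninsured = max(grid, key=eu_uninsured)
x_insured = max(grid, key=eu_insured)
# With these (invented) numbers, x_insured < x_uninsured: insurance
# lowers the privately optimal security expenditure.
```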

Joint work with Julian Williams (Durham) and Joe Swierzbinski (Aberdeen)

Fabio Massacci is a professor at the University of Trento (IT). He received his Ph.D. in Computing from the University of Rome La Sapienza in 1998. In his career he has visited Cambridge (UK), Toulouse (FR) and Siena (IT). He has published [105,111,197,203,308] articles in peer reviewed journals and conferences and his h-index is [14,22,36] depending on your favorite bibliographic database. In 2015 he received the IEEE Requirements Engineering '10 years most influential paper award' for his research on security requirements engineering. He was the European Coordinator of the project SECONOMICS on socio-economic aspects of security (see our paper with UK National Grid in the May '16 issue of IEEE Security & Privacy). Part of the ideas behind this research have also been incorporated into the Common Vulnerability Scoring Standard (CVSS) v3, released in June 2015. He is now working on empirical methods for security and vulnerability risk assessment (e.g. are all these cyber security standards actually useful?).

22 March 14:00 Understanding, Characterizing, and Detecting Facebook Like Farms / Dr. Emiliano De Cristofaro, Senior Lecturer (Associate Professor), University College London

LT2, Computer Laboratory, William Gates Building

As the number of likes of a Facebook page provides a measure of its seeming popularity and profitability, an underground market of services has emerged that aim to boost page likes. In this talk, we aim to shed light on the "like farms" ecosystem, presenting three sets of results.
First, we report on a honeypot-based measurement study: we analyze likes garnered using, respectively, Facebook ads and farms, and highlight that some farms seem to be operated by bots and do not really try to hide the nature of their operations, while others follow a much stealthier approach.
We then take a look at existing graph-based fraud detection algorithms (including those currently deployed by Facebook), showing that stealthy farms successfully evade detection by spreading likes over longer timespans and by liking many popular pages to mimic normal users.
Finally, we analyze features extracted from timeline posts. We find that like farm accounts tend to re-share content more often, use fewer words and a poorer vocabulary, target fewer topics, and generate more (often duplicate) comments and likes than normal users. Using these timeline-based features, we experiment with machine learning algorithms to detect like farm accounts, obtaining appreciably high accuracy (as high as 99% precision and 97% recall).
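
A toy rendering of the timeline-feature approach: the features echo those named above, but the thresholds, data format and decision rule are invented for illustration, and the study itself uses trained machine-learning classifiers rather than fixed cut-offs.

```python
# Sketch: compute two timeline features (re-share ratio, vocabulary
# richness), apply a simple threshold rule, and score it.

def features(posts):
    words = [w for post in posts for w in post["text"].split()]
    return {
        "reshare_ratio": sum(p["is_reshare"] for p in posts) / len(posts),
        "vocab_richness": len(set(words)) / max(1, len(words)),
    }

def is_farm(posts):
    f = features(posts)
    # farm accounts re-share heavily and use a poor vocabulary
    return f["reshare_ratio"] > 0.6 or f["vocab_richness"] < 0.3

def precision_recall(predictions, labels):
    # assumes at least one predicted and one actual positive
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    return tp / (tp + fp), tp / (tp + fn)
```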

Emiliano De Cristofaro is a Senior Lecturer at University College London (UCL). Prior to joining UCL in 2013, he was a research scientist at PARC (a Xerox company). In 2011, he received a PhD in Networked Systems from the University of California, Irvine, advised (mostly while running on the beach) by Gene Tsudik. His research interests include privacy technologies, applied cryptography, privacy and security measurements. He has served as program co-chair of the Privacy Enhancing Technologies Symposium (PETS) in 2013 and 2014, and of the Workshop on Genome Privacy and Security (GenoPri 2015). His ugly, yet up-to-date, homepage is available at

16 February 14:00 Do You See What I See? Differential Treatment of Anonymous Users / Sheharbano Khattak, University of Cambridge

LT2, Computer Laboratory, William Gates Building

The utility of anonymous communication is undermined by a growing number of websites treating users of such services in a degraded fashion. The second-class treatment of anonymous users ranges from outright rejection to limiting their access to a subset of the service’s functionality or imposing hurdles such as CAPTCHA-solving. To date, the observation of such practices has relied upon anecdotal reports catalogued by frustrated anonymity users. We present a study to methodically enumerate and characterize, in the context of Tor, the treatment of anonymous users as second-class Web citizens.

We focus on first-line blocking: at the transport layer, through reset or dropped connections; and at the application layer, through explicit blocks served from website home pages. Our study draws upon several data sources: comparisons of Internet-wide port scans from Tor exit nodes versus from control hosts; scans of the home pages of top-1,000 Alexa websites through every Tor exit; and analysis of nearly a year of historic HTTP crawls from Tor network and control hosts. We develop a methodology to distinguish censorship events from incidental failures such as those caused by packet loss or network outages, and incorporate consideration of the endemic churn in web-accessible services over both time and geographic diversity. We find clear evidence of Tor blocking on the Web, including 3.5% of the top-1,000 Alexa sites. Some blocks specifically target Tor, while others result from fate-sharing when abuse-based automated blockers trigger due to misbehaving Web sessions sharing the same exit node.
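
The control-host comparison at the heart of the methodology can be sketched as a simple classifier: a site counts as Tor-blocking only if fetches fail through a large fraction of exit nodes while the control host succeeds, which filters out sites that are simply down or flaky. The 90% failure threshold below is an illustrative parameter, not the paper's.

```python
# Sketch: distinguish Tor-specific blocking from incidental failure by
# comparing per-exit fetch results against a control (non-Tor) host.

def classify(exit_results, control_ok, threshold=0.9):
    """exit_results: {exit_fingerprint: success_bool}; control_ok: bool."""
    if not control_ok:
        return "site_down"                 # failure is not Tor-specific
    failures = sum(1 for ok in exit_results.values() if not ok)
    if failures / len(exit_results) >= threshold:
        return "blocks_tor"                # fails via (almost) every exit
    return "reachable"
```

Repeating such measurements over time and across exits in different countries is what lets the study separate deliberate blocks from the endemic churn in web-accessible services.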

Sheharbano Khattak is a PhD student and Research Assistant in the Security and NetOS groups of the Computer Lab, University of Cambridge, under the supervision of Dr. Steven J. Murdoch, Prof. Jon Crowcroft and Prof. Ross Anderson. She is externally advised by Prof. Vern Paxson at UC Berkeley. Sheharbano is a member of Robinson College and an Honorary Cambridge Trust Scholar. She likes to work on network measurement and security in isolation, and various combinations of these. Currently she studies the effects of online censorship from a number of different aspects: how it’s done, how it can be stopped, what its effects are, and the evolving shape of the ecosystem of government/policy-based censorship in particular. Previously she worked on Intrusion Detection Systems and Internet malware with a focus on botnets.

09 February 14:00 The Unfalsifiability of security claims / Cormac Herley, Microsoft Research, Redmond

LT2, Computer Laboratory, William Gates Building

There is an inherent asymmetry in computer security: things can be declared insecure by observation, but not the reverse; there is no test that allows us to declare an arbitrary system or technique secure. We show that this implies that claims of necessary conditions for security (and sufficient conditions for insecurity) are unfalsifiable (or untestable). This in turn implies an asymmetry in self-correction: while the claim that countermeasures are sufficient can always be refuted, the claim that they are necessary cannot. Thus, the response to new information can only be to ratchet upward: newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out. So errors accumulate. Further, when justifications are unfalsifiable, deciding the relative importance of defensive measures reduces to a subjective comparison of assumptions.
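
The asymmetry can be stated compactly (our notation, not the speaker's): an observation is a successful attack, so sufficiency claims are refutable by a single observation, while necessity claims are not.

```latex
% Sufficiency: "countermeasure set M suffices for system s" is falsifiable,
% since one observed attack a refutes it.
\mathrm{Suff}(M,s) \;\equiv\; \neg\exists a.\ \mathrm{Breaks}(a, s_M)
% Necessity: "countermeasure c is necessary" quantifies over all systems.
\mathrm{Nec}(c) \;\equiv\; \forall s.\ \big(\neg\mathrm{Has}(s,c)
  \Rightarrow \exists a.\ \mathrm{Breaks}(a,s)\big)
% Refuting Nec(c) requires certifying some s lacking c as secure, i.e.
% establishing \neg\exists a.\ \mathrm{Breaks}(a,s), which no finite set
% of observations can do: the claim is unfalsifiable.
```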

We argue that progress in security has been slow precisely because of a failure to identify mistakes. Bad ideas that have received no corroboration persist indefinitely, and the resources they consume crowd out sensible measures to reduce harm; examples of this abound. Many things that deliver no observed benefit are declared necessary for security, either because they have been defined to be so, or because they have been reached through logically muddled arguments.

Cormac Herley's main current interests are data analysis problems, authentication and the economics of information security. He has published widely in signal and image processing, information theory, multimedia, networking and security. He is the inventor on over 70 US patents, and has shipped technologies used by hundreds of millions of users. His research has been widely covered in outlets such as the Economist, NY Times, Washington Post, Wall St Journal, BBC, the Guardian, Wired and the Atlantic. He received the PhD degree from Columbia University, the MSEE from Georgia Tech, and the BE(Elect) from the National University of Ireland.