Department of Computer Science and Technology

Security Group

2019 seminars

19 November 14:00 Radio Protocol Vulnerabilities in 5G Networks / Ravishankar Borgaonkar, SINTEF Digital

LT2, Computer Laboratory, William Gates Building

Cellular devices support various technical features and services for 2G, 3G, 4G and upcoming 5G networks. For example, these features include physical-layer throughput categories, radio protocol information, security algorithms, carrier-aggregation bands, and types of service such as GSM-R and Voice over LTE. These technical features and network services, termed 'device capabilities', are exchanged with the network during device registration and the Authentication and Key Agreement (AKA) protocol. The 3rd Generation Partnership Project (3GPP), responsible for the worldwide standardization of cellular communication networks, has designed and mandated the use of the AKA protocol to protect subscribers' mobile services.

In this talk, we discuss several vulnerabilities discovered in cellular device authentication and registration process in 5G networks. Low-cost hardware setup, proof-of-concepts attacks to demonstrate the impact, countermeasures, and remedial actions from 3GPP/GSMA for 5G networks will also be presented.

11 November 13:00 Realistic Adversarial Machine Learning / Nicholas Carlini, Google Brain

LT2, Computer Laboratory, William Gates Building

While the vulnerability of machine learning is extensively studied, most work considers security or privacy in academic settings. This talk examines three aspects of recent work on realistic adversarial machine learning, focusing on the "black-box" threat model, where the adversary has only query access to a remote classifier, not the complete model itself.

I first study whether this black-box threat model can provide apparent robustness to adversarial examples (i.e., test-time evasion attacks). Second, I turn to the question of privacy and examine to what extent adversaries can leak sensitive data out of classifiers trained on private data. Finally, I ask to what extent the black-box threat model can be relied upon, and study "model extraction": attacks that allow an adversary to recover a model's approximate parameters using only queries.
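
As a toy illustration of the model-extraction setting (my own example, not the attack from the talk): against a purely linear model, an adversary with only query access recovers the exact parameters in d + 1 queries. Real attacks target nonlinear networks and recover only approximate parameters, at far greater query cost.

```python
# Toy "remote" model: the adversary may call query() but cannot read these.
SECRET_W = [2.0, -1.0, 0.5]
SECRET_B = 0.25

def query(x):
    """Black-box prediction endpoint: returns w . x + b."""
    return sum(wi * xi for wi, xi in zip(SECRET_W, x)) + SECRET_B

def extract(dim):
    """Recover the parameters of a linear model using dim + 1 queries."""
    b = query([0.0] * dim)            # querying the origin reveals the bias
    w = []
    for i in range(dim):
        e_i = [0.0] * dim
        e_i[i] = 1.0                  # i-th standard basis vector
        w.append(query(e_i) - b)      # each basis query reveals one weight
    return w, b

w_hat, b_hat = extract(3)
assert w_hat == SECRET_W and b_hat == SECRET_B   # exact recovery
```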

29 October 14:00 Automatically Dismantling Online Dating Fraud / Matthew Edwards, Faculty of Engineering, University of Bristol

LT2, Computer Laboratory, William Gates Building

Online romance scams are a prevalent form of mass-marketing fraud in the West, and yet few studies have presented data-driven responses to this problem. In this type of scam, fraudsters craft fake profiles and manually interact with their victims. Due to the characteristics of this type of fraud, and the peculiarities of how dating sites operate, traditional detection methods (e.g., those used in spam filtering) are ineffective.

This talk will report on our investigation into the archetype of online dating profiles used in this form of fraud, including their use of demographics, profile descriptions, and images, shedding light on both the strategies deployed by scammers to appeal to victims and the implicit traits of victims themselves. Our work is presented in the context of building and evaluating a machine-learning classifier for detecting scam profiles, and elaborates on our findings from investigating areas of under-performance.

15 October 14:00 Reducing Metadata Leakage from Encrypted Files and Communication / Kirill Nikitin, Decentralized/Distributed Systems Lab, EPFL

LT2, Computer Laboratory, William Gates Building

Most encrypted data formats leak metadata via their plaintext headers, such as format version, encryption schemes used, number of recipients who can decrypt the data, and even the recipients' identities. This leakage can pose security and privacy risks to users, e.g., by revealing the full membership of a group of collaborators from a single encrypted e-mail, or by enabling an eavesdropper to fingerprint the precise encryption software version and configuration the sender used.

We propose that future encrypted data formats improve security and privacy hygiene by producing Padded Uniform Random Blobs, or PURBs: ciphertexts indistinguishable from random bit strings to anyone without a decryption key. A PURB's content leaks nothing at all, not even the application that created it, and is padded such that even its length leaks as little as possible.

Encoding and decoding ciphertexts with no cleartext markers presents efficiency challenges, however. We present cryptographically agile encodings enabling legitimate recipients to decrypt a PURB efficiently, even when it is encrypted for any number of recipients' public keys and/or passwords, and when these public keys are from different cryptographic suites. PURBs employ Padmé, a novel padding scheme that limits information leakage via ciphertexts of maximum length M to a practical optimum of O(log log M) bits, comparable to padding to a power of two, but with lower overhead of at most 12%, decreasing with larger payloads.
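
For concreteness, the Padmé rule is simple enough to sketch in a few lines. The following is my Python rendering of the rule as described in the PURB paper (with E = floor(log2 L) and S = floor(log2 E) + 1, round L up so that its E - S low-order bits are zero); treat it as an illustration, not the reference implementation:

```python
def padme(length: int) -> int:
    """Padmé padded size, in bytes, for a payload of `length` >= 2 bytes."""
    e = length.bit_length() - 1   # E = floor(log2 L)
    s = e.bit_length()            # S = floor(log2 E) + 1: bits needed for E
    low_bits = e - s              # number of low-order bits to zero out
    mask = (1 << low_bits) - 1
    return (length + mask) & ~mask

# Examples: padme(9) == 10; padme(1025) == 1088 (~6% overhead),
# versus 2048 (~100% overhead) when padding 1025 bytes to a power of two.
```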

Bio:
Kirill Nikitin is a fifth-year Ph.D. student in the Decentralized/Distributed Systems lab at École polytechnique fédérale de Lausanne (EPFL), advised by Prof. Bryan Ford. His research spans topics in privacy, systems security, and blockchains. His primary interest at the moment is in designing encryption schemes and security protocols that provide improved metadata protection.
Currently, Kirill is doing an internship in the Confidential Computing group at Microsoft Research, Cambridge.
For the detailed bio, see https://nikirill.com/.

12 September 14:00 An analysis of the threats of the consumer spyware industry / Diarmaid Harkin, Alfred Deakin Institute, Deakin University

LT2, Computer Laboratory, William Gates Building

Invasive surveillance software known as “spyware” is available for general consumption, allowing everyday users the ability to place a smartphone under close surveillance. The widespread availability of spyware creates clear risks that this software can be used abusively, with many indicators that it is being used frequently in the context of domestic and family violence. This presentation reports on the findings of an Australian-based study into the threats of the consumer spyware industry. The consumer spyware industry was subjected to a market analysis and legal analysis, in addition to user analysis of spyware products and a technical analysis of a select sample of spyware.
Our investigation revealed a range of concerning findings about the threat of spyware, including: (a) multiple spyware companies encourage and promote the use of spyware against intimate partners and children; (b) Android users carry a higher risk of being subjected to spyware than iPhone users; and (c) technical analysis of spyware reveals that software developed within the consumer spyware industry often exhibits extremely poor data-security practices, creating additional risks of exposure of highly sensitive personal information and data. Ways of countering the threats of consumer spyware are also considered.


Bio: Dr Diarmaid Harkin is an Alfred Deakin Postdoctoral Research Fellow and Senior Lecturer in Criminology at Deakin University. His current active research interests include the use of private security companies in the context of domestic violence, the consumer spyware industry, and the challenges of cyber-policing.

23 July 14:00 Deploying Differential Privacy for the 2020 Census of Population and Housing / Simson L. Garfinkel, Senior Computer Scientist for Confidentiality and Data Access, U.S. Census Bureau

LT2, Computer Laboratory, William Gates Building

When differential privacy was created more than a decade ago, the motivating example was statistics published by an official statistics agency. In theory there is no difference between theory and practice, but in practice there is.

In attempting to transition differential privacy from theory to practice, and in particular for the 2020 Census of Population and Housing, the U.S. Census Bureau has encountered many challenges unanticipated by differential privacy's creators. Many of these challenges had less to do with the mathematics of differential privacy and more to do with operational requirements that differential privacy's creators had not discussed in their writings. These challenges included: obtaining qualified personnel and a suitable computing environment; the difficulty of accounting for all uses of the confidential data; the lack of release mechanisms that align with the needs of data users; the expectation on the part of data users that they will have access to micro-data; the difficulty of setting the value of the privacy-loss parameter, ε (epsilon); the lack of tools and trained individuals to verify the correctness of differential privacy implementations; and push-back from some members of the data-user community.

Addressing these concerns required developing a novel hierarchical algorithm that makes extensive use of a high-performance commercial optimizer; transitioning the computing environment to the cloud; educating insiders about differential privacy; engaging with academics, data users, and the general public; and redesigning both data flows inside the Census Bureau and some of the final data publications to be in line with the demands of formal privacy.
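
To make the role of ε concrete, here is a minimal sketch of the basic Laplace mechanism applied to a counting query (sensitivity 1). This toy is my illustration of the privacy-loss parameter only; the Bureau's hierarchical algorithm described above is far more involved:

```python
import random

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    """ε-differentially-private count. A counting query has sensitivity 1
    (one person changes the count by at most 1), so Laplace noise with
    scale 1/ε suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace(1 / epsilon)

ages = [34, 71, 19, 66, 82, 45]
noisy = dp_count(ages, lambda a: a >= 65, epsilon=0.5)
# Smaller ε means larger noise: stronger privacy, less accurate statistics.
```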

Bio:
Simson Garfinkel is the Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau. He holds seven US patents and has published more than 50 research articles in computer security and digital forensics. He is a fellow of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and a member of the National Association of Science Writers. His most recent book is The Computer Book, which features 250 chronologically arranged milestones in the history of computing. As a journalist, he has written about science, technology, and technology policy in the popular press since 1983, and has won several national journalism awards.

Garfinkel received three Bachelor of Science degrees from MIT in 1987, a Master of Science in Journalism from Columbia University in 1988, and a Ph.D. in Computer Science from MIT in 2005.

https://simson.net/bio/

07 May 15:00 rust-vmm: Building the Virtualization Stack of the Future / Andreea Florescu, Amazon

LT2, Computer Laboratory, William Gates Building

rust-vmm is an open-source project, born in January 2019 with ambitious goals:
1) design and implement a set of safe, secure and efficient virtualization building blocks,
2) reduce code duplication across existing Rust based Virtual Machine Monitors (VMMs) and
3) improve the security and quality of existing Rust based VMMs.

The purpose of rust-vmm is to provide a foundation of virtualization crates that other projects can use to rapidly develop virtualization solutions. The rust-vmm project empowers the community to focus on their products' key differentiators rather than re-implementing common virtualization components such as KVM API wrappers, virtio-based device models, and virtual-machine memory libraries.

In this talk we go over the fundamentals of building VMMs and why we believe Rust is the right programming language for this project. We look at how different open-source projects use rust-vmm crates to build virtualization products and prototypes, outlining both the advantages and the trade-offs. In the end, we try to answer the controversial question: "Does the world need more VMMs?"

07 May 14:00 Firecracker microVMs - How to Securely Run Thousands of Workloads on a Single Host / Diana Popa, Amazon

LT2, Computer Laboratory, William Gates Building

Serverless computing offers increased agility and scalability for users, in part since the cloud providers own the management of the underlying infrastructure. Services such as AWS Lambda and Fargate leverage hardware virtualization to provide strong isolation between multiple tenants. Until recently, this was based on full EC2 instances, which run stateless, short-lived serverless workloads at suboptimal densities. To break out of the status quo, we developed Firecracker as a fundamental building block for multi-tenant container and function-based services.

Firecracker is a security-focused virtual machine monitor, written in Rust, that runs on top of KVM and is amenable to CPU and memory oversubscription. It implements a minimalist device model, boots blazingly fast, and incurs only a very low memory overhead. Firecracker is already used to run production workloads, and its development continues as an open-source project.

30 April 14:00 A Promise Is A Promise: The Effect Of Commitment Devices On Computer Security Intentions / Alisa Frik, International Computer Science Institute (ICSI), University of California Berkeley

LT1, Computer Laboratory, William Gates Building

Commitment devices are a technique from behavioral economics that have been shown to mitigate the effects of present bias—the tendency to discount future risks and gains in favor of immediate gratifications. In this paper, we explore the feasibility of using commitment devices to nudge users towards complying with varying online security mitigations. Using two online experiments, with over 1,000 participants total, we offered participants the option to be reminded or to schedule security tasks in the future. We find that both reminders and commitment nudges can increase users' intentions to install security updates and enable two-factor authentication, but not to configure automatic backups. Using qualitative data, we gain insights into the reasons for postponement and how to improve future nudges. We posit that current nudges may not live up to their full potential, as the timing options offered to users may be too rigid.


Bio:
Dr. Alisa Frik is a postdoctoral researcher at the International Computer Science Institute (ICSI) and the University of California, Berkeley. She works with the Usable Security and Privacy research group under the direction of Dr. Serge Egelman. Her current projects concern usable security for emerging healthcare technologies for older adults, increasing users' computer security compliance by reducing present bias, personalised security nudges, bystanders' privacy, the privacy concerns of domestic workers, privacy expectations regarding always-listening voice assistant devices, and the effects of ad-blockers on consumers' welfare. She obtained a Ph.D. in Behavioral and Experimental Economics and Social Sciences from the University of Trento, Italy, and spent a year visiting Carnegie Mellon University, where she worked with Prof. Alessandro Acquisti. She has done research on the impact of risk tolerance and need for control on privacy-related behaviors; implicit measurement of privacy risk attitudes; and the factors affecting consumers' trust in how e-commerce websites will treat their personal information, and their subsequent intention to purchase from such websites. Dr. Frik's preferred methodological tools include lab and field experiments, surveys, focus groups, interviews, and participatory design.

21 March 14:00 Securing Systems with Insecure Hardware / Kaveh Razavi, Systems and Network Security Group, Vrije Universiteit Amsterdam

LT2, Computer Laboratory, William Gates Building

Recent years have shown that the basic principles we rely on for building secure computing systems do not always hold. Memory is plagued with disturbance errors and processors leak sensitive information across security boundaries. In this seminar, I will show the true impact of these flaws in real-world systems and discuss our efforts in mitigating them.

12 March 14:00 “Doing more” to keep children safe online – why the tech sector can only do so much / Andy Phippen, University of Plymouth

LT2, Computer Laboratory, William Gates Building

In this talk Prof Phippen will explore current and emerging policy positions on online safeguarding and argue that, while the tech sector is placed under extreme pressure, and legislative threat, to make sure children can use online systems safely, this approach results in increasingly prohibitive technology that negatively impacts children’s rights while not addressing the concerns that governments claim they wish to tackle (for example, preventing access to pornography, stopping young people from sending indecent images, and making sure they can’t “cyberbully”).

Using recent case studies, such as the tracking of children and the BBFC age verification of pornography services in the UK, Prof Phippen, a computer scientist by training, will take these policy positions to task using extensive empirical work with young people, to highlight how these technical approaches are doomed to fail, and distract from the need for more responsible policy making with a wider stakeholder group.

19 February 14:00 Thunderclap: Exploring Vulnerabilities in Operating System IOMMU Protection via DMA from Untrustworthy Peripherals / Theo Markettos, Computer Laboratory, University of Cambridge

LT2, Computer Laboratory, William Gates Building

Note: This is a practice talk for NDSS (~20min)

Direct Memory Access (DMA) attacks have been known for many years: DMA-enabled I/O peripherals have complete access to the state of a computer and can fully compromise it, including reading and writing all of system memory. With the popularity of Thunderbolt 3 over USB Type-C and smart internal devices, opportunities for these attacks to be performed casually, with only seconds of physical access to a computer, have greatly broadened. In response, commodity hardware and operating system (OS) vendors have incorporated support for Input-Output Memory Management Units (IOMMUs), which impose memory protection on DMA and are widely believed to protect against DMA attacks.

We investigate the state-of-the-art in IOMMU protection across OSes using a novel I/O-security research platform, and find that current protections fall short when faced with a functional network peripheral that uses its complex interactions with the OS for ill intent. We describe vulnerabilities in macOS, FreeBSD, and Linux, which notionally utilize IOMMUs to protect against DMA attackers. Windows uses the IOMMU only in limited cases, and it remains vulnerable. Using Thunderclap, an open-source FPGA research platform that we built, we explore new classes of OS vulnerability arising from inadequate use of the IOMMU. The complex vulnerability space for IOMMU-exposed shared memory available to DMA-enabled peripherals allows attackers to extract private data (sniffing cleartext VPN traffic) and hijack kernel control flow (launching a root shell) in seconds using devices such as USB-C projectors or power adapters. We have worked closely with OS vendors to remedy these vulnerability classes, and they have now shipped substantial feature improvements and mitigations as a result of our work.
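
One root cause is easy to illustrate: IOMMU mappings are page-granular, so exposing even a small buffer to a peripheral also exposes any other kernel data sharing those 4 KiB pages. The sketch below is my simplified model of that granularity problem, not code from the Thunderclap platform:

```python
PAGE = 4096  # typical MMU/IOMMU page size in bytes

def pages_mapped(addr: int, length: int) -> set:
    """Page numbers the IOMMU must map so a device can reach
    the byte range [addr, addr + length)."""
    return set(range(addr // PAGE, (addr + length - 1) // PAGE + 1))

# A 100-byte network packet buffer straddling a page boundary...
exposed = pages_mapped(4050, 100)        # requires mapping pages {0, 1}
# ...also exposes an unrelated kernel secret that happens to sit on page 1:
secret_addr = 4200
assert secret_addr // PAGE in exposed    # the DMA device can now read it
```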

29 January 14:00 Keeping authorities honest with verifiable append-only logs, and making backdoored software updates detectable / Mustafa Al Bassam, Department of Computer Science, University College London (UCL)

LT2, Computer Laboratory, William Gates Building

Transparency is important in services that rely on authoritative information, as it provides a robust mechanism for holding authorities accountable for their actions, or making those actions publicly auditable. A number of solutions have emerged in recent years that provide public auditability in the setting of public key infrastructure (such as certificate and key transparency), and cryptocurrencies provide an example of how to allow for public verifiability in a financial setting.

In this seminar, we explore the technical mechanisms for building transparent, auditable or verifiable systems, including verifiable data structures, append-only logs and blockchains. We discuss how such systems can provide extra security assurances to users in the context of compelled software backdoors (e.g. via the Investigatory Powers Act), by enforcing transparency mechanisms in software distribution.
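
To give a flavour of these mechanisms, here is a toy hash-chained append-only log (my sketch; deployed systems such as certificate transparency use Merkle trees, which additionally give compact inclusion and consistency proofs):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class AppendOnlyLog:
    """Each head commits to the entire history: altering or dropping any
    earlier entry changes every later head, so publishing the head pins
    the log contents."""

    GENESIS = b"\x00" * 32

    def __init__(self):
        self.entries = []
        self.head = self.GENESIS

    def append(self, entry: bytes) -> bytes:
        self.head = _h(self.head + _h(entry))
        self.entries.append(entry)
        return self.head

    @classmethod
    def replay(cls, entries) -> bytes:
        """Recompute the head an auditor should obtain for these entries."""
        head = cls.GENESIS
        for e in entries:
            head = _h(head + _h(e))
        return head

log = AppendOnlyLog()
log.append(b"update-1.0.3 digest")
published = log.append(b"update-1.0.4 digest")
# An auditor replaying the claimed entries must reach the published head;
# a silently substituted (backdoored) update cannot match it.
assert AppendOnlyLog.replay(log.entries) == published
```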

22 January 14:00 Evil on the Internet / Richard Clayton, Computer Laboratory, University of Cambridge

LT2, Computer Laboratory, William Gates Building

This talk introduces the audience to a wide range of 'evil' websites that aim to defraud you of your money, with live examples presented to explain how they work and what is currently known about the criminals who operate them. There are many types of fraud ... you will see "phishing" sites which collect banking credentials; fake escrow sites defrauding the winners of online auctions; fake banks which hold cash for fake African dictators; Ponzi scheme websites where almost (but not quite) everyone knows that they’re a scam; booters where you can buy a DDoS attack on your game playing opponents; ecommerce shops where you should not spend your money and various other types of evil including some very cute pictures of (non-existent) puppies.

Please note that, very regrettably of course, there's so much to see that this talk doesn't fit into a one-hour slot.


Bio

Dr Richard Clayton is the Director of the Cambridge Cybercrime Centre based in the Computer Laboratory. He has been studying online fraud for decades and is currently heading an initiative to not only study online wickedness in Cambridge, but to collect extremely large cybercrime datasets and make them available to other academics so that they can contribute their expertise as well.

15 January 14:00 Trustworthy and Accountable Function-as-a-Service using Intel SGX / Andrew Paverd, Microsoft Research Cambridge

LT2, Computer Laboratory, William Gates Building

Function-as-a-Service (FaaS) is a recent and already very popular paradigm in cloud computing. The function provider need only specify the function to be run, usually in a high-level language like JavaScript, and the service provider orchestrates all the necessary infrastructure and software stacks. The function provider is billed only for the actual computational resources used by the function invocation. Compared to previous cloud paradigms, FaaS requires significantly more fine-grained resource measurement mechanisms, e.g. to measure compute time and memory usage of a single function invocation with sub-second accuracy. Thanks to the short duration and stateless nature of functions, and the availability of multiple open-source frameworks, FaaS enables non-traditional service providers, e.g. individuals or data centers with spare capacity. However, this exacerbates the challenge of ensuring that resource consumption is measured accurately and reported reliably. It also raises the issues of ensuring computation is done correctly and minimizing the amount of information leaked to service providers.

To address these challenges, we introduce S-FaaS, the first architecture and implementation of FaaS to provide strong security and accountability guarantees backed by Intel SGX. To match the dynamic event-driven nature of FaaS, our design introduces a new key distribution enclave and a novel transitive attestation protocol. A core contribution of S-FaaS is our set of resource measurement mechanisms that securely measure compute time and memory allocations within an enclave.