Security Group
2025 seminars
03 June 14:00 Safer (cyber)spaces: Reconfiguring Digital Security Towards Solidarity / Dr Julia Slupska, Ofcom
Webinar (via Zoom online)
Misogyny and domestic abuse are old problems, but tech companies have enabled these harms to grow and proliferate on their platforms in new forms, such as social media harassment, cyberstalking, and deepfake intimate image abuse. This talk will summarise my work to build better systems for survivors of online gender-based violence across academia, advocacy, and policy. In my DPhil, I argued that due to its engineering focus on defending networks and information, cybersecurity neglects the human element, and particularly differences in power and relationships between humans that produce (in)security. I developed a new method, participatory threat modelling, which brings marginalised people and civil society groups into the process of systematically assessing digital security threats. As a part of the work, I co-founded a research collective named re:configure, which ran feminist digital security workshops with groups such as survivors of intimate image abuse, environmental activists, and migrant domestic workers. After my PhD, I worked at the online safety charity Glitch, where I managed the delivery of and co-authored a quantitative research study on digital misogynoir (hate directed at Black women) across multiple tech platforms, including Meta, X/Twitter, and 4chan. I am now working as Senior Associate at Ofcom, the regulator in charge of implementing the UK’s Online Safety Act, to put this evidence into guidance for industry on online violence against women and girls.
Zoom link: https://cam-ac-uk.zoom.us/j/87951267899?pwd=96gCfOc00Q3OqG6MiDpl6pPQ9hhiwk.1
27 May 14:00 007: End-to-End Encrypted Audio Calls via Blind Audio Mixing / Emad Heydari Beni and Lode Hoste, Nokia Bell Labs
Webinar & LT2, Computer Laboratory, William Gates Building.
End-to-end encryption (E2EE) for messaging has become an industry standard and is widely implemented in many applications. However, applying E2EE to audio calls, particularly group calls, remains a complex challenge. Unlike text messages, audio calls involve capturing audio streams from each participant, which must be combined into a single, coherent audio stream that all participants can hear. This is known as audio mixing. In a non-E2EE system, the audio is mixed by a central server, and the result is sent to each participant. In contrast, in an E2EE system, each audio stream must be encrypted locally and sent to every participant in the group call. This method presents major challenges with respect to network overhead, audio synchronization, and limitations on applying audio enhancement techniques.
In this talk, we present a new approach using Fully Homomorphic Encryption (FHE), which enables end-to-end encryption for group voice calls. Concretely, we introduce blind audio mixing and an FHE-compatible compression technique.
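The idea of blind mixing can be illustrated with a toy additively homomorphic scheme. This sketch uses textbook Paillier with tiny demo primes, not the authors' FHE construction: multiplying Paillier ciphertexts adds the underlying plaintexts, so a server can sum one frame of audio samples without ever decrypting them.

```python
# Hypothetical illustration of "blind mixing": the server adds encrypted
# audio samples without seeing the plaintext. Textbook Paillier with toy
# primes; real deployments would use an FHE scheme and large parameters.
import random
from math import gcd

def keygen(p=2003, q=2011):
    # p, q are tiny demo primes; real Paillier uses ~1024-bit primes.
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    mu = pow(lam, -1, n)  # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

def mix(pk, ciphertexts):
    # Blind mixing: the ciphertext product decrypts to the sample sum,
    # so the server mixes the frame without decrypting anything.
    n, _ = pk
    acc = 1
    for c in ciphertexts:
        acc = acc * c % (n * n)
    return acc

pk, sk = keygen()
samples = [120, 340, 75]  # one audio frame from three participants
mixed = mix(pk, [encrypt(pk, s) for s in samples])
assert decrypt(pk, sk, mixed) == sum(samples)
```

Paillier is only additively homomorphic; the talk's FHE approach is needed for richer operations such as the compression technique mentioned above.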
Zoom link: https://cam-ac-uk.zoom.us/j/83912370794?pwd=KOjLaKTwbRWvlsSjiLSgpTqIkEs8xI.1
20 May 14:00 Two Tales of End-to-End Encryption / Sunoo Park, NYU
Webinar & LT2, Computer Laboratory, William Gates Building.
End-to-end encryption has become the gold standard for securing communications, bringing strong privacy guarantees to billions of users worldwide. My talk will discuss two potential avenues through which the security protections of end-to-end encryption might be undermined, concerning government circumvention and commercial circumvention respectively. First, I examine a recent case of a large-scale law enforcement hack of an encrypted communication network called Encrochat in Europe. Second, I raise security concerns related to the integration of AI features across apps and devices—a growing trend, following remarkable recent advances in generative artificial intelligence.
Zoom link: https://cam-ac-uk.zoom.us/j/82651240291?pwd=ffMwua3b0yUz0C2TFaIxzra2aplAwr.1
13 May 14:00 Harnessing the Power, Managing the Risk: The Future of AI, Data, and Cybersecurity / Evan Kotsovinos, Vice President Privacy, Safety, Security, Google
Webinar & LT2, Computer Laboratory, William Gates Building.
AI has the potential to transform learning, creativity, and productivity. It represents a profound platform shift in technology – like the internet, or the shift to mobile. As with any transformational shift, AI will be (and already is) used for good and for malicious purposes. This talk will cover some of the practical examples of threats we see and risks we are preparing for, but also the incredible opportunities available to us to use AI to change the game in cybersecurity and ultimately make the world safer.
Evan Kotsovinos is the Vice President of the Privacy, Safety and Security team at Google, which is the central engineering function that builds and scales the foundational technology that keeps billions of people safe online. His team is focused on cybersecurity threat detection, analysis and counterabuse, building advanced AI/ML security technologies, and protecting privacy, identity and data. He continues to work on planet-scale solutions that protect the security and privacy of Google’s billions of users around the world.
Evan was previously Global Head of Infrastructure at American Express, responsible for the company’s data centers, networks, compute, storage, hybrid cloud, data infrastructure, and SRE teams. Prior to that he served as Asia CIO at Morgan Stanley, where he managed all technology services and resources in the region.
Evan began his career as a Senior Research Scientist with Deutsche Telekom Laboratories and is a recognized leader in cloud computing, having led the team that developed one of the first cloud computing systems in the early 2000s at the University of Cambridge. He holds a Doctorate in Computer Science from the University of Cambridge and a Master’s in Finance from London Business School.
Zoom link: https://cam-ac-uk.zoom.us/j/86098869386
06 May 14:00 Kintsugi: A Decentralized E2EE Key Recovery Protocol / Emilie Ma
Webinar (via Zoom online)
Key recovery is the process of regaining access to end-to-end encrypted data after the user has lost their device, but still has their password. Existing E2EE key recovery methods, such as those deployed by Signal and WhatsApp, centralize trust by relying on servers administered by a single provider.
In this talk, we share our recent work on Kintsugi, a decentralized recovery protocol that distributes trust over multiple recovery nodes. This talk will cover how we developed Kintsugi and its unique security properties, as well as compare it to prior E2EE key recovery work.
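Distributing trust over multiple recovery nodes can be illustrated with Shamir secret sharing. This is a simplified sketch, not the Kintsugi protocol itself: a recovery key is split so that any t of n nodes can reconstruct it, while fewer than t learn nothing.

```python
# Hypothetical sketch of threshold key recovery via Shamir secret sharing
# (an illustration of distributed trust, not the Kintsugi protocol).
import random

P = 2**127 - 1  # Mersenne prime; shares live in the field GF(P)

def split(secret, n, t):
    # Random degree-(t-1) polynomial with constant term = secret;
    # node i holds the point (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, n=5, t=3)    # five recovery nodes, any three suffice
assert reconstruct(shares[:3]) == key
assert reconstruct(shares[2:]) == key
```

A real recovery protocol additionally needs to authenticate the user (e.g. via the password mentioned above) and defend against malicious nodes, which is where Kintsugi's specific security properties come in.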
Zoom link: https://cam-ac-uk.zoom.us/j/84072830114?pwd=3zxgIngk7X6zSPiEM6SMsziQWBW07y.1
18 March 14:00 Fighting cancer and fake news: A battle against misinformation / Alice Hutchings (University of Cambridge)
Webinar & LT2, Computer Laboratory, William Gates Building.
Cancer-related medical misinformation is a wicked problem, deeply embedded in social, cultural, and technical systems. It represents a deliberate and profit-driven phenomenon, perpetuated by bad actors exploiting online platforms and societal vulnerabilities. Cancer misinformation thrives on information asymmetry, where creators hold an informational advantage over their audience. Bad actors exploit this imbalance by distorting facts and concealing critical context, preying on knowledge gaps and fear and uncertainty following a diagnosis. Drawing from signalling theory, we will explore how misinformation creators mimic trustworthy signals like expertise (e.g., impersonating professionals), consensus (e.g., fake reviews), and familiarity (e.g., mimicking reputable formats), manipulating audiences into accepting their claims as credible. These individuals and organisations manipulate trust, emotions, and gaps in knowledge, fostering harmful behaviours and undermining public health efforts. Social media's monetisation systems incentivise engagement over accuracy, perpetuating a vicious cycle of distrust in conventional medicine. Cancer misinformation leads to devastating outcomes, including delays in treatment, financial exploitation, and diminished trust in healthcare systems. Understanding medical misinformation tactics and the structural mechanisms enabling misinformation is critical to devising effective interventions that address its root causes. This talk explores the roots, proliferation, and impacts of cancer-related misinformation, focusing on its mimicking of trust signals, dissemination through digital ecosystems, and profound consequences for patients and caregivers.
Zoom link:
https://cam-ac-uk.zoom.us/j/83142055830?pwd=tVRAsZyuo4LMFR0RinDBY8YHmpwqTY.1
14 March 14:00 ‘To Report or Not to Report: is that the Question?’ Exploring Young People's Reporting Practices of Illicit Drug Ads on Social Media / Ashly Fuller, University College London
Webinar & GN06, Computer Laboratory, William Gates Building.
Better algorithms mean less illegal content on social media. To improve these algorithms, users need to report such content, yet they often do not. This talk will explore young people’s attitudes and practices around reporting a specific online harm: illicit drug advertisements.
A survey of UK students (13–18) examined their reporting practices and tested different messages to encourage reporting. Surprisingly, none were effective, highlighting deeper challenges in the reporting process. This calls into question whether user reporting is the best way to reduce harmful content and highlights the need for a balanced approach that combines proactive detection with user engagement.
04 March 14:00 Towards a Faster Finality Protocol for Ethereum / Luca Zanolini, Ethereum Foundation
Webinar & GN06, Computer Laboratory, William Gates Building.
Ethereum's Gasper consensus protocol typically requires 64 to 95 slots (the units of time during which a new chain extending the previous one by one block is proposed and voted on) to finalize, even under ideal conditions with synchrony and honest validators. This exposes a significant portion of the blockchain to potential reorganizations during changes in network conditions, such as periods of asynchrony.
In this talk, I will introduce 3SF, a novel consensus protocol that addresses these limitations. With 3SF, finality is achieved within just three slots after a proposal, drastically reducing the exposure to reorganizations. This presentation will explore the motivation, design, and implications of 3SF, offering a new perspective on the future of Ethereum's consensus protocol.
Paper: https://arxiv.org/abs/2411.00558
Zoom link: https://cam-ac-uk.zoom.us/j/82398112798?pwd=vg2ZZm8mdSBW8A8mkkaMOOqSaFEgzw.1
Meeting ID: 823 9811 2798
Passcode: 784044
28 February 14:00 Challenges and tensions around online safety, security and privacy / Dan Sexton, Internet Watch Foundation
Webinar & FW11, Computer Laboratory, William Gates Building.
Dan Sexton is Chief Technology Officer at the Internet Watch Foundation, a not-for-profit whose vision is an internet free from child sexual abuse: a safe place for children and adults around the world to use.
25 February 14:00 Downvoted to Oblivion: Censorship in Online, LGBTQ+ Communities / Kyle Beadle, UCL
Webinar & FW11, Computer Laboratory, William Gates Building.
Online communities enable surveillance among LGBTQ+ users despite being used as safe spaces where users can explore their identity free from most online harms. Coercion, doxxing, and public outing are all examples of the privacy violations these users face. These are experienced when users fail to conform to fellow community members’ expected language and expressions of gender identity and sexuality. Current moderation systems fail to capture this peer surveillance because of the complexity of language and unspoken rules involved. This talk will explore how surveillance is enabled as well as its effects on the censorship of gender identity/expression in online LGBTQ+ communities.
Paper Link: https://discovery.ucl.ac.uk/id/eprint/10200690/
Zoom link:
https://cam-ac-uk.zoom.us/j/84128296595?pwd=WJoeK08vOkhNVzLyAqbwwuDYAonFQP.1
Meeting ID: 841 2829 6595
Passcode: 505923
18 February 14:00 Physical-Layer Security of Satellite Communications Links / Simon Birnbach, University of Oxford
Webinar & SS03, Computer Laboratory, William Gates Building.
In recent years, building and launching satellites has become considerably cheaper, making satellite systems more accessible to an expanding user base. This accessibility has led to a diverse array of applications—such as navigation, communications, and earth observation—that depend on satellites. However, hardware limitations and operational considerations often render cryptographic solutions impractical for these systems. Furthermore, the availability of low-cost software-defined radios has made signal capture, injection, and interference attacks more attainable for a wider range of potential attackers.
Therefore, mitigations must be developed for satellites that have already been launched without adequate protections in place. This talk introduces some of our research into how satellite systems are vulnerable, as well as ways to protect these systems.
Bio:
Simon Birnbach is a Senior Research Associate and a Royal Academy of Engineering UK IC Postdoctoral Research Fellow in the Systems Security Lab of Professor Ivan Martinovic in the Department of Computer Science at the University of Oxford. He specialises in the security of cyber-physical systems, with a focus on smart home, aviation, and aerospace security.
Zoom link:
https://cam-ac-uk.zoom.us/j/87594645761?pwd=qlkBblRXyjku3I3C3mnWcCZuidMP7B.1
Meeting ID: 875 9464 5761
Passcode: 648387
11 February 14:00 Designing Counter Strategies against Online Harms / Stefanie Ullmann, University of Cambridge
Webinar & FW11, Computer Laboratory, William Gates Building.
Common mitigation strategies to combat harmful speech online, such as reporting and blocking, are often insufficient as they are reactive, involve unethical human labour and impose censorship. This talk explores alternative counter strategies such as a quarantining tool and an automated counterspeech generator. Quarantining online hate speech and disinformation like a computer virus gives power to the individual user, while a counterspeech generator is specifically designed to produce diverse counter responses to different forms of online harm. Both strategies can protect users from harm and significantly ease the burden of human counterspeakers. The talk will cover the benefits as well as current shortcomings of these strategies and discuss necessary further developments.
Join Zoom Meeting
https://cam-ac-uk.zoom.us/j/81329781369?pwd=a8JRkV6tb7LRQUL4Pa4UbxmHRcaXam.1
Meeting ID: 813 2978 1369
Passcode: 294250
04 February 14:00 Researchers’ experiences with vulnerability disclosures / Yasemin Acar, Paderborn University
Webinar & FW11, Computer Laboratory, William Gates Building.
Vulnerability discoveries are becoming more and more prevalent in scientific research. Researchers usually wish to publish their research and, before that, have the vulnerabilities acknowledged and fixed, contributing to a secure digital world. However, the vulnerability disclosure process is fraught with obstacles, and handling vulnerabilities is challenging as it involves several parties (vendors, companies, customers, and community). We want to shed light on the vulnerability disclosure process and develop guidelines and best practices, serving vulnerability researchers as well as the affected parties for better collaboration in disclosing and fixing vulnerabilities.
We collected more than 1900 research papers published at major scientific security conferences and analyzed how disclosures are reported, finding inconsistent reporting, as well as spotty acknowledgments and fixes by affected parties. We then conducted semi-structured interviews with 21 security researchers with a broad range of expertise who published their work at scientific security conferences and qualitatively analyzed the interviews.
We discovered that the main problem starts with even finding the proper contact to disclose to. Bug bounty programs or general-purpose contact email addresses, often staffed by AI or untrained personnel, posed obstacles to timely and effective reporting of vulnerabilities.
Experiences with CERTs (entities supposed to help notify affected parties and facilitate coordinated fixing of vulnerabilities) were inconsistent: some extremely positive, some disappointing. Our interviewees further talked about lawsuits and public accusations from vendors, developers, colleagues, or even the research community. Successful disclosures often hinge on researcher experience and personal contacts, which poses personal and professional risks to newer researchers.
We're working on making our collected best practices and common pitfalls more widely known both to researchers and industry, for more cooperative disclosure experiences.
Zoom link: https://cam-ac-uk.zoom.us/j/89699287551?pwd=shaVGdAyVagZX2AvrVI9mazeKk8ssI.1
Meeting ID: 896 9928 7551
Passcode: 471680
Bio: Yasemin Acar (she/her) is a professor of computer science at Paderborn University, Germany, and a research assistant professor at The George Washington University. She focuses on human factors in computer security. Her research centers humans, their comprehension, behaviors, wishes and needs. She aims to better understand how software can enhance users’ lives without putting their data at risk. Her recent focus has been on human factors in secure development, investigating how to help software developers implement secure software development practices. Her research has shown that working with developers on these issues can resolve problems before they ever affect end users. Her research has won distinguished paper awards at IEEE Security and Privacy and USENIX Security, as well as an NSA best cyber security paper competition award. Her web page: https://yaseminacar.de.
31 January 14:00 Police responses to young people’s experiences of cyberstalking / Tahreem Tahir, University of Central Lancashire
Webinar & FW11, Computer Laboratory, William Gates Building.
In our digitally interconnected world, cyberstalking has become a significant concern for online users worldwide. Young people have embraced new technologies for communication, making social media apps such as Facebook, X, Instagram, Snapchat and other platforms an integral part of their lives. Young people utilise digital spaces to create new connections and even initiate, sustain, and carry out part of their intimate relationships online. Consequently, technology has provided opportunities to facilitate online monitoring of others due to the proficiency and ease with which information can be obtained.
The rise of digital technologies has given perpetrators new avenues and opportunities to target victims, resulting in a rise in cyberstalking. However, little work to date has explored young people’s perceptions and experiences of cyberstalking. Research consistently reveals that very few cyberstalking victims choose to report their experiences to the police, yet there is a notable research gap regarding young people’s reasons for not reporting cyberstalking incidents.
Guided by the power differentials between police officers and young people, this research examines police officers’ use of authority to regulate and influence the behaviour of young people. This paper will explore some of the key issues identified in the literature review, including the prevalence and variations of cyberstalking among young people, and experiences of and barriers to reporting to the police and other agencies. It draws on insights from interviews with young cyberstalking victims and frontline response police officers. Preliminary findings from the voices of young people indicate age bias among police officers, resulting in misguided advice on cyberstalking incidents, leading to escalated risk and lack of support. The perspectives and experiences of young people emphasise the importance of lasting changes in attitudes, policies and practices. By tackling these, the research aims to contribute to improved victim support, inform policy and refine practices within the cyberstalking sector.