Security Group
2026 seminars
03 March 14:00  Self-Auditable Key Transparency at Scale / Hossein Hafezi, New York University (NYU)
Webinar & SS03, Computer Laboratory, William Gates Building.
Webinar: https://cam-ac-uk.zoom.us/j/81404415431?pwd=IutjEfY03U0bQzsMUJwnq3eJQHLH1I.1
Key transparency has gained significant attention in recent years, with major messaging applications such as WhatsApp and iMessage—serving billions of users—already deploying it. However, a major challenge in current practical deployments is the large size of audit proofs. Concretely, auditing WhatsApp’s key transparency requires computation equivalent to hashing hundreds of megabytes of data every minute, which is impractical for everyday users, especially those on mobile devices. Consequently, the auditing task is delegated to a small set of third-party organizations, such as Cloudflare, that perform the audits on behalf of users. In essence, billions of users must trust a handful of third-party entities to verify their security, which introduces a significant trust bottleneck. Our work aims to eliminate this dependency by making the auditing process lightweight enough for users to perform themselves, securely and efficiently, improving the privacy of billions of users.
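The building block behind these audit proofs is Merkle-tree verification: the talk's specific protocol is not described here, but as a rough illustration of why audit cost scales with the volume of hashing, the sketch below (a generic Merkle inclusion-proof check, not WhatsApp's actual audit procedure; the key/value encoding and domain-separation bytes are assumptions) shows the per-leaf verification step that a full log audit must repeat across every update in every epoch.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, the hash primitive assumed for this sketch."""
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path.

    `proof` is a list of (sibling_hash, sibling_is_left) pairs, ordered
    from the leaf up to the root.
    """
    node = h(b"\x00" + leaf)  # domain-separate leaves from interior nodes
    for sibling, sibling_is_left in proof:
        if sibling_is_left:
            node = h(b"\x01" + sibling + node)
        else:
            node = h(b"\x01" + node + sibling)
    return node == root

# Build a tiny 4-leaf tree of (user, public key) records to demo.
leaves = [b"alice:pk1", b"bob:pk2", b"carol:pk3", b"dave:pk4"]
lh = [h(b"\x00" + leaf) for leaf in leaves]
n01 = h(b"\x01" + lh[0] + lh[1])
n23 = h(b"\x01" + lh[2] + lh[3])
root = h(b"\x01" + n01 + n23)

# Inclusion proof for bob (index 1): sibling lh[0] on the left, n23 on the right.
proof = [(lh[0], True), (n23, False)]
assert verify_inclusion(b"bob:pk2", proof, root)
```

Each inclusion check is cheap (logarithmically many hashes), but a full audit must additionally verify that successive epoch roots are consistent with every key update in between, which is where the hundreds-of-megabytes-per-minute hashing cost at WhatsApp scale comes from.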
24 February 14:00  Digital platforms and health disinformation / Marco Zenone, University of Ottawa
Webinar ONLY, Computer Laboratory, William Gates Building.
Webinar link: https://cam-ac-uk.zoom.us/j/86829723102?pwd=nqoBSTcVkedi6qTLL2vbRsbWla9fcz.1
This talk will discuss the role of digital platforms in providing the infrastructure through which health disinformation is advertised and circulated. The presentation will examine several case studies exploring how platforms are leveraged to promote and spread disinformation surrounding scientifically unsupported cancer treatments. Platforms examined will include Google keyword advertising, Meta (Facebook and Instagram) ads, Google Reviews, Amazon marketplaces, and crowdfunding platforms such as GoFundMe. The talk will emphasize moving away from blaming individuals and toward building accountable systems that prevent, mitigate, and respond to disinformation, while strengthening platform accountability.
Bio: Marco Zenone (he/him) is an Assistant Professor of Health Science Communication at the Faculty of Health Sciences, University of Ottawa. His research examines the spread, impact, and political economy of health misinformation and disinformation, as well as how health topics are portrayed across digital platforms and public spaces. A major focus is examining digital platforms as commercial determinants of health and how their infrastructures shape the production, circulation, and visibility of health disinformation. His research emphasizes moving beyond individual blame toward building accountable systems that prevent, mitigate, and respond to health disinformation.
10 February 14:00  Abusability of Automation Apps in Intimate Partner Violence / Shirley Zhang, University of Wisconsin-Madison
Webinar & FW11, Computer Laboratory, William Gates Building.
Recording link: https://www.cl.cam.ac.uk/research/security/seminars/archive/video/2026-02-10-t243715.html
Automation apps such as iOS Shortcuts and Android Tasker enable users to "program" new functionalities, also called recipes, on their smartphones. For example, users can create recipes to set the phone to silent mode once they arrive at their office, or to save a note when an email is received from a particular sender. These automation apps provide convenience and can help improve productivity. However, they can also open new avenues for abuse, particularly in the context of intimate partner violence (IPV). This paper systematically explores the potential of automation apps to be used for surveillance and harassment in IPV scenarios. We analyze four popular automation apps—iOS Shortcuts, Samsung Modes & Routines, Tasker, and IFTTT—evaluating their capabilities to facilitate surveillance and harassment. Our study reveals that these tools can be exploited by abusers today to monitor, impersonate, overload, and control their victims. The notification and logging mechanisms currently implemented in these apps are insufficient to warn victims about the abuse or to help them identify its root cause and stop it. We therefore built a detection mechanism to identify potentially malicious Shortcuts recipes and tested it on 12,962 publicly available Shortcuts recipes. We found 1,014 recipes that can be used to surveil and harass others. We then discuss how users and platforms can mitigate the abuse potential of automation apps.
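The abstract does not describe how the detection mechanism works internally; one plausible shape for such a detector is a source-to-sink scan over a recipe's action sequence, flagging recipes that read sensitive data and then send it off-device. The sketch below is purely illustrative — the action names and rules are hypothetical, not the paper's actual detector or Apple's Shortcuts schema.

```python
# Hypothetical action identifiers, invented for illustration.
SENSITIVE_SOURCES = {"get_current_location", "get_latest_photos", "get_clipboard"}
EXFILTRATION_SINKS = {"send_message", "post_url", "send_email"}

def flags_recipe(actions: list) -> bool:
    """Flag a recipe that reads sensitive data, then sends it off-device.

    `actions` is the recipe's action sequence in execution order.
    """
    seen_source = False
    for action in actions:
        if action in SENSITIVE_SOURCES:
            seen_source = True
        elif action in EXFILTRATION_SINKS and seen_source:
            return True  # sensitive read followed by an outbound send
    return False

# A surveillance-style recipe: grab the location, text it to another number.
assert flags_recipe(["get_current_location", "send_message"])
# A benign recipe: silence the phone on arrival (reads location, sends nothing).
assert not flags_recipe(["get_current_location", "set_silent_mode"])
```

A real detector would also need to account for triggers (e.g. location- or time-based activation), indirect data flows through variables, and benign recipes that legitimately share data, which is why flagged recipes are "potentially" malicious and warrant review rather than automatic blocking.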

