Department of Computer Science and Technology

Security Group

2022 seminars


13 December 16:00 A Sociotechnical Audit: Assessing Police use of Facial Recognition / Evani Radiya-Dixit, Minderoo Centre for Tech & Democracy

Webinar - link on talks.cam page after 12 noon Tuesday

The adoption of facial recognition by police has been the subject of significant debate. Police often advocate for this technology to help prevent crime, but it can also threaten fundamental rights. We propose a “sociotechnical audit” as a tool to help outside stakeholders evaluate the ethics and legality of police use of facial recognition. Developed for England and Wales, this audit extends to all types of facial recognition for identification, including live, retrospective, and mobile phone facial recognition. We developed this audit using existing literature and feedback from academia, government, civil society, and police organisations. The audit can help reveal the risks of facial recognition, evaluate legal compliance, and inform policy and oversight.

We apply this audit to three British police deployments and find that all three fail to meet ethical and legal standards for the governance of facial recognition. We highlight the lack of (a) evidence of a lawful interference with privacy rights, (b) transparent evaluations of discrimination, (c) measures for remedy for harmed persons, and (d) regular oversight from an independent ethics body and the wider community. The harms of facial recognition in policing move beyond the issue of bias in the technology, and these broader issues of privacy, discrimination, accountability, and oversight need urgent attention. Ultimately, we recommend that regulators, civil society groups, and researchers use this audit to scrutinise police use of facial recognition, evaluate biometric technologies in other contexts, and join calls for a ban on police use of facial recognition in public spaces.


29 November 14:00 The Security of Post-Quantum Telco Networks, or changing 10 billion door-locks in 197 countries / Zygmunt A Lozinski, IBM

Webinar & FW11, Computer Laboratory, William Gates Building.

There are around 10 billion devices and systems in the world’s telecom networks. In 1994 Peter Shor worked out how a theoretical quantum computer could factor numbers quickly, and so break RSA and related public key cryptosystems. Now we are planning for this eventuality. NIST recently selected 4 new Post-Quantum Cryptography algorithms to replace RSA, DSA and ECDSA. Governments and civil society groups are recommending organizations should start to plan the implementation of Post-Quantum Cryptography. The Telecoms and Banking industries are taking a lead to create detailed roadmaps to implement Post-Quantum Cryptography. The GSMA (the trade association of the world’s 1000-odd mobile operators) announced the creation of a Post-Quantum Telco Network Task Force in September 2022. Implementing PQC across an industry means creating an end-to-end view of the cryptographic landscape: a detailed inventory of where and why cryptography is used in devices, networks, systems, data-stores and interfaces; the thousands of open-source projects, open standards and industry-specific recommendations that rely on cryptography; the vendor products that implement cryptographic algorithms, sometimes invisibly; and the business processes. And then prioritising the changes. This talk will describe how the telecom industry will respond to this challenge. Eventually, all industries will have to make similar changes. And one final question: how do we achieve industry-wide crypto agility, for the next time we need to update our cryptosystems?
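The final question about crypto agility has a well-known engineering answer: indirect every algorithm choice through a central registry, so call sites never name a concrete cipher. A minimal illustrative sketch (the role names and registry shape are assumptions for illustration; ML-KEM and ML-DSA are the NIST-standardised successors to the selected Kyber and Dilithium schemes):

```python
# Illustrative crypto-agility sketch: callers request an algorithm by role
# ("kem", "signature"), never by concrete name, so one registry update swaps
# a classical scheme for a post-quantum one without touching call sites.

REGISTRY = {
    "kem": "rsa-oaep-2048",        # classical default
    "signature": "ecdsa-p256",
}

def negotiate(role: str) -> str:
    """Return the currently configured algorithm for a cryptographic role."""
    return REGISTRY[role]

def migrate_to_pqc() -> None:
    """One central change point: repoint roles at PQC schemes."""
    REGISTRY["kem"] = "ml-kem-768"        # CRYSTALS-Kyber derivative
    REGISTRY["signature"] = "ml-dsa-65"   # CRYSTALS-Dilithium derivative

# Before migration, a session uses the classical KEM...
assert negotiate("kem") == "rsa-oaep-2048"
migrate_to_pqc()
# ...afterwards every caller transparently gets the PQC replacement.
assert negotiate("kem") == "ml-kem-768"
```

The hard part in practice is the inventory step the abstract describes: finding every place that bypasses such a registry and names an algorithm directly.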


22 November 14:00 Bad men, good men, and loving women: Gender Constructions in the UK’s Online Action Counters Terrorism (ACT) Campaign / Harmonie Toros, University of Kent

Webinar & FW11, Computer Laboratory, William Gates Building.

The United Kingdom is widely considered a world leader in its counterterrorism (CT), countering violent extremism (CVE) and preventing violent extremism (PVE) campaigns. The Action Counters Terrorism Campaign is a public-facing campaign of the UK government aimed at raising the general public’s awareness of how it can support its CT/CVE/PVE efforts. A narrative analysis of the campaign’s YouTube channel (2017-2020) reveals a clear dominant narrative that “ordinary people” can assist in CT/CVE/PVE by being alert and following basic rules (such as Run, Hide, Tell). A gendered narrative analysis reveals far more surprising results: The terrorist threat is understood as exclusively male and only men are viewed as at risk of radicalization. Women are predominantly portrayed in relation to men in their lives (wives, mothers). Through their love and care, women can support efforts to save them by noticing when “something is wrong.” Offering an original methodological approach, this article reveals how the gendered constructions of the British awareness campaign are so engrained in powerful understandings of gender and political violence that they ignore even widespread public security debates, such as those surrounding British girls and women who traveled to Iraq/Syria to join DAESH.


15 November 14:00 Towards Meaningful Stochastic Defences in Machine Learning / Ilia Shumailov, University of Oxford

Webinar & FW11, Computer Laboratory, William Gates Building.

Machine learning (ML) has proven to be more fragile than previously thought, especially in adversarial settings. A capable adversary can cause ML systems to break at training, inference, and deployment stages. In this talk, I will cover recent work on attacking and defending machine learning pipelines using stochastic defences; I will describe how seemingly powerful defences fail to provide any security and end up being vulnerable to even standard attackers. I will then demonstrate a number of possible randomness-based defences that can provide theoretical and practical performance improvements.
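One concrete instance of a meaningful stochastic defence is prediction smoothing: classify many noisy copies of an input and take a majority vote, so a small adversarial perturbation must survive the injected noise to flip the decision. A toy sketch with a deliberately brittle, hypothetical one-dimensional classifier (not a model from the talk):

```python
import random

def base_classifier(x: float) -> int:
    # Hypothetical brittle model: a hard decision threshold at 0.
    return 1 if x > 0.0 else 0

def smoothed_classifier(x: float, sigma: float = 1.0, n: int = 1001) -> int:
    """Majority vote over Gaussian-perturbed copies of the input.

    The prediction now depends on the neighbourhood of x under the noise,
    not on the exact point, which blunts tiny adversarial nudges.
    """
    votes = sum(base_classifier(x + random.gauss(0.0, sigma)) for _ in range(n))
    return 1 if votes > n // 2 else 0

random.seed(0)
# Points comfortably on one side of the boundary keep their label.
assert smoothed_classifier(2.0) == 1
assert smoothed_classifier(-2.0) == 0
```

Crucially, as the abstract warns, randomness alone is not security: an attacker who can query the smoothed model can average out the noise, so the noise scale and the certification argument have to be chosen deliberately.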

Bio: Ilia Shumailov holds a PhD in Computer Science from the University of Cambridge, specialising in machine learning and computer security. During his PhD, under the supervision of Prof Ross Anderson, Ilia worked on a number of projects spanning the fields of machine learning security, cybercrime analysis and signal processing. Following the PhD, Ilia joined the Vector Institute in Canada as a Postdoctoral Fellow, where he worked under the supervision of Prof Nicolas Papernot and Prof Kassem Fawaz. Ilia is currently a Junior Research Fellow at Christ Church, University of Oxford.


11 November 16:00 Picking on locks: early security, cybersecurity, and the sophisticated criminal trope / Yanna Papadodimitraki, University of Cambridge

Webinar & FW11, Computer Laboratory, William Gates Building.

Cybercrime and cybersecurity have taken centre stage in our lives due to technology adoption. Although we sometimes think of them as new, public perceptions of them can stem from engrained views on security and crime. This paper explores public perceptions of cybercrime and cybersecurity through the lens of history. It draws parallels between early security (Victorian lockpicking competitions) and cybersecurity, between historical and contemporary modes of advertising (cyber)security solutions, and between burglars and cybercriminals, to understand modern-day attitudes towards cybersecurity and cybercrime.


25 October 14:00 Secure and efficient networks / Oleh Stupak, University of Cambridge

Webinar & FW11, Computer Laboratory, William Gates Building.

This study aims to understand efficient network formation and optimal defensive resource distribution in the presence of an intelligent attacker. We present a two-player dynamic framework in which the Defender and the Attacker compete in a network formation and defence game with heterogeneous vertex values. Such a model allows for studying the trade-off between network efficiency and security. Contrary to the literature, we find that a centrally protected star network does not yield the maximum payoff for the defending side in most circumstances, even though it is the most secure network formation. Additionally, the analysis reveals a new type of network that often arises in equilibrium in games with limited defensive resources -- the maxi-core network.
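The efficiency/security trade-off can be made concrete in a toy attacker-defender game. The rules below are illustrative assumptions, not the paper's exact model: the defender protects a set of vertices, the attacker then removes the unprotected vertex that hurts the defender most, and the defender keeps the value of the largest surviving connected component.

```python
def surviving_value(edges, values, removed):
    """Value of the largest connected component after `removed` is deleted."""
    nodes = set(values) - {removed}
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    best, seen = 0, set()
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                 # depth-first search for one component
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v])
        seen |= comp
        best = max(best, sum(values[v] for v in comp))
    return best

def defender_payoff(edges, values, defended):
    """Attacker best-responds: removes the unprotected vertex hurting most."""
    targets = set(values) - set(defended)
    if not targets:
        return sum(values.values())
    return min(surviving_value(edges, values, t) for t in targets)

# A 5-vertex star with a protected centre: the attacker can only take a leaf.
star = [("c", leaf) for leaf in "abde"]
vals = {"c": 2, "a": 1, "b": 1, "d": 1, "e": 1}
assert defender_payoff(star, vals, {"c"}) == 5  # 6 total minus one leaf
```

Even in this toy version one can see the tension the paper studies: the star is maximally robust when its centre is protected, yet connecting everything through one hub is exactly what makes the undefended centre catastrophic.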


18 October 14:00 Technology-Facilitated Abuse: The Role of Tech in the Context of Intimate Partner Violence / Leonie Tanczer, University College London

Webinar & FW11, Computer Laboratory, William Gates Building.

In recent years, forms of online harassment and sexual abuse facilitated through information and communication technologies (ICT) have emerged. These ICT-supported attacks range from cyberstalking to online behavioural control. To date, many efforts to tackle technology-facilitated abuse ("tech abuse") have centred on "conventional" cyber risks such as abuse on social media platforms and restrictions on devices such as phones and laptops. However, emerging technologies such as "smart", Internet-connected devices, as well as autonomous systems such as robots, will expand domestic violence victims’ risk trajectories further. To provide an overview of this evolving research and policy area, ongoing efforts of the "Gender and Tech" Lab (formerly: Gender and IoT) will be shared. The presentation will give listeners an insight into the latest developments in this space and aspires to challenge assumptions around domestic abuse, (cyber)crime, and threats/risks.


11 October 14:00 Generative Language Model, Deepfake, and Fake News 2.0: Scenarios and Implications / Dongwon Lee, Penn State University

Webinar & LT1, Computer Laboratory, William Gates Building.

The recent explosive advancements in both generative language models in NLP and deepfake-enabling methods in Computer Vision have greatly helped trigger a new surge in AI research and introduced a myriad of novel AI applications. However, at the same time, these new AI technologies can be used by adversaries for malicious purposes, opening a window of opportunity for fake news creators and state-sponsored hackers. In this talk, I will present a few plausible scenarios where adversaries could exploit these cutting-edge AI techniques to their advantage, producing more sophisticated fake news by synthesizing realistic artifacts or evading state-of-the-art fake news detectors. I will conclude the talk by discussing the important implications of the new type of fake news (i.e., Fake News 2.0) and some future research directions.

Bio: Dongwon Lee is a professor and the director of the Ph.D. program in the information school (IST) at Penn State University, USA. He is also an ACM Distinguished Scientist (2019) and a Fulbright Cyber Security Scholar (2022). Before starting at Penn State, he worked at AT&T Bell Labs, NJ, and obtained his Ph.D. in Computer Science from UCLA. From 2015 to 2017, he served as a Program Director at the National Science Foundation (NSF), co-managing cybersecurity research and education programs and contributing to the development of national research priorities. His research addresses problems at the intersection of data science, machine learning, and cybersecurity. Since 2017 he has led the SysFake project at Penn State, investigating computational and socio-technical solutions to better combat fake news. More details of his research can be found at: http://pike.psu.edu/. During the academic year 2022-2023, he is visiting the University of Cambridge as a Fulbright scholar and a fellow of Churchill College.


19 August 16:00 Analog vs. Digital Epsilons: Implementation Considerations for Differential Privacy / Olya Ohrimenko, University of Melbourne

Webinar & FW11, Computer Laboratory, William Gates Building.

Differential privacy (DP) provides a rigorous framework for releasing data statistics while bounding information leakage. It is currently a de facto privacy framework that has received significant interest from the research community and has been deployed by the U.S. Census Bureau, Apple, Google, Microsoft, and others. However, DP analysis often assumes a perfect computing environment and building blocks such as random noise distribution samplers. Unfortunately, a naive implementation of DP mechanisms can invalidate their theoretical guarantees.

In this talk, I will highlight two attacks based on implementation flaws in the noise generation commonly used in DP systems: floating-point representation attack against continuous distributions and timing attacks against discrete distributions. I will then show that several state-of-the-art implementations of DP are susceptible to these attacks as they allow one to learn the values being protected by DP. Our evaluation demonstrates success rates of 92.56% for floating-point attacks in a machine learning setting and 99.65% for end-to-end timing attacks on private sum. I will conclude with suggested mitigations, emphasising that a careful implementation of DP systems may be as important as it is for cryptographic libraries.
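One mitigation discussed in this line of work is Mironov's "snapping" idea: round the noisy output onto a coarse fixed grid so the low-order floating-point bits of the released value carry no information about the protected input. A loose sketch only (the full construction also clamps the output and ties the grid resolution to the noise scale, which matters for the formal guarantee):

```python
import math
import random

def snapped_laplace(value: float, scale: float, resolution: float) -> float:
    """Laplace mechanism whose output is snapped to a fixed grid.

    Naive `value + noise` leaves tell-tale patterns in the result's
    floating-point representation; snapping coarsens the output so those
    bits are uninformative. This is a sketch, not a certified mechanism.
    """
    u = random.random() - 0.5                       # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of Laplace noise with the given scale.
    noise = -scale * math.copysign(math.log(1 - 2 * abs(u)), u)
    noisy = value + noise
    return resolution * round(noisy / resolution)   # snap to the grid

random.seed(1)
out = snapped_laplace(41.3, scale=1.0, resolution=0.25)
# Every released value lies exactly on the 0.25 grid, regardless of the
# input's floating-point representation.
assert math.isclose(out / 0.25, round(out / 0.25))
```

The timing attacks in the talk make a complementary point: even a correctly distributed sampler can leak through how long it takes, so constant-time sampling matters as much as output snapping.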

The talk is based on joint work with Jiankai Jin (The University of Melbourne), Eleanor McMurtry (ETH Zurich) and Benjamin Rubinstein (The University of Melbourne), that appeared in IEEE Symposium on Security and Privacy 2022.

Bio: Olya Ohrimenko is an Associate Professor at The University of Melbourne, which she joined in 2020. Prior to that she was a Principal Researcher at Microsoft Research in Cambridge, UK, where she started as a Postdoctoral Researcher in 2014. Her research interests include privacy and integrity of machine learning algorithms, data analysis tools and cloud computing, including topics such as differential privacy, verifiable and data-oblivious computation, trusted execution environments, side-channel attacks and mitigations. Recently Olya has worked with the Australian Bureau of Statistics and National Australia Bank. She has received solo and joint research grants from Facebook and Oracle and is currently a PI on an AUSMURI grant.


15 July 16:00 Mapping the Geography of Cybercrime - findings from an expert survey / Miranda Bruce, University of Oxford

Webinar & FW11, Computer Laboratory, William Gates Building.

The global geography of cybercriminal offenders is not well understood. Existing data on the subject are not well suited to establishing the true location of offenders, nor can they be scaled up to accurately compare rates of cybercrime across nations. We propose a novel approach to this problem: an expert survey with leading cybercrime investigators and intelligence professionals from across the world. In 2021 we asked 92 experts to nominate the countries they believe are the most significant sources of five different types of cybercrime, and then to rate the impact, technical skill, and professionalism of those crimes. This paper discusses the survey’s initial results, limitations, and future directions for the project.

Miranda Bruce is a Postdoctoral Fellow at the University of Oxford. She contributes to the CRIMGOV project, exploring the sociological and geographical elements of cybercrime. Her past research focused on the Internet of Things and its social implications, especially the use of social theory to rethink how humans and machines are connected. She was the lead editor of a Routledge collection on belonging, runs the UNSW Masters course on Cybercrime in Australia, and has developed and convened several advanced university courses.


14 June 14:00 No Spring Chicken: Quantifying the Lifespan of Exploits in IoT Malware Using Static and Dynamic Analysis / Arwa Al Alsadi, Delft University of Technology

Webinar - link on talks.cam page after 12 noon Tuesday

The Internet of Things (IoT) is composed of a wide variety of software and hardware components that inherently contain vulnerabilities. Previous research has shown that it takes only a few minutes from the moment an IoT device is connected to the Internet to the first infection attempts. Still, we know little about the evolution of exploit vectors.

In this talk, we will discuss which vulnerabilities are being targeted in the wild, how their functionality has changed over time, and for how long vulnerabilities remain targeted. Understanding these questions can help in the secure development and deployment of IoT networks.


10 June 16:00 Exploring Internet services mis-configuration at scale / Danny Willems and Gregory Boddin, LeakIX; Raphael Proust, Nomadic Labs

Webinar & FW11, Computer Laboratory, William Gates Building.

In this talk we will demonstrate that not only can vulnerabilities in exposed services be dangerous, but so can leaving those services unconfigured or running with default credentials. We will also touch on the DevOps side and discuss left-over deployment artifacts that can disclose information and credentials about an infrastructure.


07 June 14:00 Reward Sharing for Mixnets / Claudia Diaz, KU Leuven

Webinar - link on talks.cam page after 12 noon Tuesday

In this talk I will present a reward sharing scheme for incentivized network privacy infrastructures such as the Nym mixnet. The talk is based on a paper co-authored with Aggelos Kiayias and Harry Halpin that will soon appear in the MIT Cryptoeconomic Systems (CES) journal. The reward scheme uses a bootstrapping reserve and a bandwidth pricing mechanism to fund a decentralized, economically sustainable mixnet that can scale, as increased usage translates into fees that allow the mixnet to grow to meet demand. The scheme periodically selects mix nodes to mix packets proportionally to their reputation, which signals the confidence of stakeholders in a node’s reliability and performance. Selected mix nodes are then rewarded proportionally to their reputation and performance, and share their rewards with the stakeholders supporting them. We prove the properties of the scheme with a game-theoretic analysis, showing that the equilibria promote decentralization and mixnet performance. We further evaluate the scheme empirically via simulations that consider non-ideal conditions and show that the mixnet can be viable under realistic assumptions.
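The splitting rule described above reduces to simple arithmetic. A sketch under assumed parameter names (this is illustrative, not Nym's actual scheme): each node's share of the reward pool scales with reputation times measured performance, the operator takes a fixed cut, and delegates split the remainder pro rata by stake.

```python
def share_rewards(pool, nodes, operator_cut=0.1):
    """Split a reward pool across mix nodes, then within each node.

    Across nodes: proportional to reputation * performance.
    Within a node: a fixed operator cut, the rest pro rata by delegate stake.
    """
    weights = {n: d["reputation"] * d["performance"] for n, d in nodes.items()}
    total_w = sum(weights.values())
    payouts = {}
    for name, d in nodes.items():
        node_reward = pool * weights[name] / total_w
        operator = operator_cut * node_reward
        stake = sum(d["delegates"].values())
        payouts[name] = {
            "operator": operator,
            "delegates": {who: (node_reward - operator) * s / stake
                          for who, s in d["delegates"].items()},
        }
    return payouts

nodes = {
    "mix1": {"reputation": 2.0, "performance": 1.0,
             "delegates": {"alice": 60, "bob": 40}},
    "mix2": {"reputation": 1.0, "performance": 0.5,
             "delegates": {"carol": 100}},
}
p = share_rewards(1000.0, nodes)
# mix1 carries weight 2.0 of 2.5, so it receives 800 of the 1000 pool.
assert round(p["mix1"]["operator"] + sum(p["mix1"]["delegates"].values()), 6) == 800.0
```

The game-theoretic content of the paper lies in showing that, with the right parameters, rules of this shape make honest, decentralized participation an equilibrium rather than just an accounting convention.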


27 May 16:00 PostCog: A “Search Engine” Enabling Interdisciplinary Research into Underground Forums at Scale / Anh V. Vu, University of Cambridge

Webinar & FW11, Computer Laboratory, William Gates Building.

Underground forums provide useful insights into cybercrime, where researchers analyse underlying economies, key actors, their discussions and interactions, as well as different types of cybercrime. This interdisciplinary topic of study incorporates expertise from diverse areas, including computer science, criminology, economics, psychology, and other social sciences. Historically, there were significant challenges around access to data, but there are now research datasets of millions of messages scraped from underground forums. The problems now stem from the size of these datasets and the technical nature of methods and tools available for data sampling and analysis at scale, which make data exploration difficult for non-technical users.

We introduce PostCog, a web application developed to support users from both technical and non-technical backgrounds in forum analyses, such as search, information extraction and cross-forum comparison. The prototype’s usability is evaluated through two user studies with expert users of the CrimeBB dataset. PostCog is made available for academic research upon signing an agreement with the Cambridge Cybercrime Centre.
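The cross-forum keyword search PostCog exposes can be illustrated with a toy inverted index. This sketches the general technique only, not PostCog's implementation or the CrimeBB schema:

```python
from collections import defaultdict

def build_index(posts):
    """Map each lower-cased token to the (forum, post_id) pairs containing it."""
    index = defaultdict(set)
    for forum, post_id, text in posts:
        for token in text.lower().split():
            index[token].add((forum, post_id))
    return index

def search(index, query):
    """Posts containing every query token, grouped by forum for comparison."""
    tokens = query.lower().split()
    if not tokens:
        return {}
    hits = set.intersection(*(index.get(t, set()) for t in tokens))
    by_forum = defaultdict(list)
    for forum, post_id in sorted(hits):
        by_forum[forum].append(post_id)
    return dict(by_forum)

posts = [
    ("forumA", 1, "selling fresh accounts cheap"),
    ("forumA", 2, "tutorial on account security"),
    ("forumB", 7, "cheap accounts available now"),
]
# Grouping hits by forum is what enables the cross-forum comparison
# described above.
assert search(build_index(posts), "cheap accounts") == {"forumA": [1], "forumB": [7]}
```

At CrimeBB scale the same idea needs a real search backend, but the interface, keywords in, per-forum hits out, is what makes such data usable for non-technical researchers.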

Zoom details: https://us02web.zoom.us/j/81692776088?pwd=Z1lBaG1jOUJWWnFsK29oQWdUQjNYZz09

Meeting ID: 816 9277 6088 , Passcode: 874860


24 May 14:00 Are You Really Muted?: A Privacy Analysis of Mute Buttons in Video Conferencing Apps / Kassem Fawaz, University of Wisconsin-Madison

Webinar - link on talks.cam page after 12 noon Tuesday

In the post-pandemic era, video conferencing apps (VCAs) have converted previously private spaces — bedrooms, living rooms, and kitchens — into semi-public extensions of the office. And for the most part, users have accepted these apps in their personal space, without much thought about the permission models that govern the use of their personal data during meetings. While access to a device’s video camera is carefully controlled, little has been done to ensure the same level of privacy for accessing the microphone. In this work, we ask the question: what happens to the microphone data when a user clicks the mute button in a VCA? We first conduct a user study to analyze users' understanding of the permission model of the mute button. Then, using runtime binary analysis tools, we trace raw audio in many popular VCAs as it traverses the app from the audio driver to the network. We find fragmented policies for dealing with microphone data among VCAs — some continuously monitor the microphone input during mute, and others do so periodically. One app transmits statistics of the audio to its telemetry servers while the app is muted. Using network traffic that we intercept en route to the telemetry server, we implement a proof-of-concept background activity classifier and demonstrate the feasibility of inferring the ongoing background activity during a meeting — cooking, cleaning, typing, etc. We achieved 81.9% macro accuracy on identifying six common background activities using intercepted outgoing telemetry packets when a user is muted.
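A proof-of-concept classifier of the kind described, working from coarse audio statistics rather than raw audio, can be sketched as nearest-centroid classification. The features and labels below are synthetic stand-ins, not the paper's telemetry features:

```python
import math

def features(samples):
    """Coarse statistics of an audio frame: mean absolute level and RMS energy."""
    n = len(samples)
    mean = sum(abs(s) for s in samples) / n
    energy = math.sqrt(sum(s * s for s in samples) / n)
    return (mean, energy)

def train_centroids(labelled_frames):
    """Average the feature vectors per activity label."""
    sums = {}
    for label, frame in labelled_frames:
        f = features(frame)
        m, e, c = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (m + f[0], e + f[1], c + 1)
    return {lab: (m / c, e / c) for lab, (m, e, c) in sums.items()}

def classify(centroids, frame):
    """Assign the frame to the activity with the nearest feature centroid."""
    f = features(frame)
    return min(centroids, key=lambda lab: math.dist(f, centroids[lab]))

quiet = [0.01, -0.02, 0.01, 0.0]   # synthetic low-energy "typing" frame
loud = [0.8, -0.9, 0.7, -0.6]      # synthetic high-energy "vacuuming" frame
cents = train_centroids([("typing", quiet), ("vacuuming", loud)])
assert classify(cents, [0.02, -0.01, 0.0, 0.01]) == "typing"
assert classify(cents, [0.7, -0.8, 0.9, -0.5]) == "vacuuming"
```

The privacy point is that even such crude statistics, the kind an app might justify as telemetry, suffice to separate background activities, which is why transmitting them while "muted" is consequential.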

Bio: Kassem Fawaz is an Assistant Professor in the Electrical and Computer Engineering department at the University of Wisconsin–Madison. He earned his Ph.D. in Computer Science and Engineering from the University of Michigan. His research interests include the security and privacy of the interactions between users and connected systems. He was awarded the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2019. He also received the National Science Foundation CAREER award in 2020, the Google Android Security and PrIvacy REsearch (ASPIRE) award in 2021, and a Facebook Research Award in 2021. His research is funded by the National Science Foundation, the Federal Highway Administration, and the Defense Advanced Research Projects Agency. His work on privacy has been featured in several media outlets, such as the BBC, Wired, the Wall Street Journal, the New Scientist, and ComputerWorld.


17 May 14:00 VerLoc: Verifiable Localization in Decentralized Systems / Katharina Kohls, Radboud University

Webinar - link on talks.cam page after 12 noon Tuesday

We tackle the challenge of reliably determining the geolocation of nodes in decentralized networks, considering adversarial settings and without depending on any trusted landmarks. In particular, we consider active adversaries that control a subset of nodes, announce false locations and strategically manipulate measurements. To address this problem we propose, implement and evaluate VerLoc, a system that allows verifying the claimed geo-locations of network nodes in a fully decentralized manner. VerLoc securely schedules roundtrip time (RTT) measurements between randomly chosen pairs of nodes. Trilateration is then applied to the set of measurements to verify claimed geo-locations. We evaluate VerLoc both with simulations and in the wild using a prototype implementation integrated into the Nym network (currently run by thousands of nodes). We find that VerLoc can localize nodes in the wild with a median error of 60 km, and that in attack simulations it is capable of detecting and filtering out adversarial timing manipulations for network setups with up to 20% malicious nodes.
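The physical principle behind this can be sketched as a speed-of-light plausibility test: a round-trip time between two nodes upper-bounds their real distance, so a claimed location farther from the verifier than the RTT allows is inconsistent. A simplified sketch (real measurements include processing delay and network detours, which VerLoc's estimation must account for):

```python
import math

C_FIBER_KM_PER_MS = 200.0  # signal speed in fibre, roughly 2/3 of c

def great_circle_km(a, b):
    """Haversine distance between two (lat, lon) points given in degrees."""
    (la1, lo1), (la2, lo2) = (map(math.radians, a), map(math.radians, b))
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def claim_consistent(claimed_pos, verifier_pos, rtt_ms):
    """The RTT upper-bounds one-way distance; reject claims that exceed it."""
    max_km = C_FIBER_KM_PER_MS * rtt_ms / 2
    return great_circle_km(claimed_pos, verifier_pos) <= max_km

london, sydney = (51.5, -0.1), (-33.9, 151.2)
# A 5 ms RTT allows at most 500 km one way: plausible for a node near
# Cambridge, impossible for one claiming to be in Sydney.
assert claim_consistent((52.2, 0.1), london, rtt_ms=5)
assert not claim_consistent(sydney, london, rtt_ms=5)
```

Combining such bound checks from several randomly chosen verifiers is what turns a single plausibility test into the trilateration-style verification the abstract describes.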


10 May 14:00 “You’re never left alone:” the use of digital technologies in domestic abuse / Lisa Sugiura, Jason R.C. Nurse, Jacki Tapley and Chloe Hawkins

Webinar - link on talks.cam page after 12 noon Tuesday

In this seminar, based on Home Office funded research, we will discuss the increasing ways that digital technologies are being used by domestic abuse perpetrators to monitor, threaten, and humiliate their victims. Technology-facilitated domestic abuse (TFDA) is progressively employed within controlling and coercive relationships and involves a wide range of abusive behaviours, including the use of spyware to access accounts and monitor victims’ movements, the creation of fake accounts to harass or impersonate victims, the use of covert devices and the Internet of Things to stalk victims, and image-based sexual abuse to degrade victims. The ease, availability, and familiarity of everyday technologies mean that technical skills are unnecessary to perpetrate most forms of TFDA, so these tools are routinely exploited by perpetrators. The harms of TFDA are no less serious than those arising from other forms of coercive and controlling behaviour and physical violence. Offline and online abuse are interconnected and, within the context of domestic abuse, often co-occurring. The rapidly developing tactics of TFDA, however, are often inadequately considered or overlooked in policy, legislative and support responses.


03 May 14:00 The gap between research and practice in authentication / Arvind Narayanan, Princeton University

Webinar - link on talks.cam page after 12 noon Tuesday

I’ll describe a recent line of work on identifying authentication vulnerabilities in mobile phone services and websites. I’ll show how authentication practice has lagged behind research and, in turn, research has not paid attention to the practical constraints that made these vulnerabilities more likely. Finally, I will draw from the experience of this research to share some thoughts on how information security research can better serve societal needs.

This talk is based on joint work with Kevin Lee, Ben Kaiser, Sten Sjöberg, and Jonathan Mayer.


26 April 14:00 Attacking and Fixing the Bitcoin Network / Muoi Tran, National University of Singapore

Webinar - link on talks.cam page after 12 noon Tuesday

Much existing research in the blockchain field has focused on cryptographic primitives and improved distributed blockchain protocols. The network required to connect these distributed systems, however, has received relatively little attention. Yet there is increasing evidence that the network can become the bottleneck and root cause of some of the most pressing challenges blockchains face today.

In this talk, I will introduce a few recent research projects from my group that focus on attacking and securing Bitcoin’s peer-to-peer networking protocol. I will begin with our novel Bitcoin partitioning attack, dubbed Erebus, that stealthily isolates one or more Bitcoin peer nodes from the rest of the network. Then, I will discuss how we have collaborated with Bitcoin developers to mitigate the Erebus attacks and present some remaining questions. Finally, I will mention a few open problems in securing the networking layer of blockchain in general.

Bio: Muoi Tran is a Research Fellow at the National University of Singapore, where he recently obtained a Ph.D. degree under the guidance of Zhenkai Liang and Min Suk Kang (KAIST). His research interests broadly include network security, blockchain security and privacy. He was selected as one of the Microsoft Research Asia fellows in 2019, a distinguished shadow reviewer at IEEE S&P 2021, and an awardee of the Dean's Graduate Research Excellence Award at NUS in 2022.


22 March 14:00 Risk and Resilience: Promoting Adolescent Online Safety and Privacy through Human-Centered Computing / Pamela Wisniewski, University of Central Florida

Webinar - link on talks.cam page after 12 noon Tuesday

Privacy is a social mechanism that helps people regulate their interpersonal boundaries in a way that facilitates more meaningful connections and safer online interactions with others. Dr. Wisniewski’s research focuses on 1) community-based approaches for helping people (adults and teens) co-manage their online privacy with people they trust, 2) teen-centric approaches to online safety that promote self-regulation and empower teens to effectively manage online risks, and 3) online safety interventions that protect our most vulnerable youth from severe online risks, such as sexual predation. Through these research trajectories, she has become a leading HCI scholar at the intersections of adolescent online safety, developmental science, interaction design, and human-centered computing. She has created an impactful research program that intertwines research and education to engage teens, college students, experts in adolescent psychology, experts in participatory design and research methods, community partners, and industry stakeholders in a community-based effort to build the village needed to protect our youth from online risks by empowering them to protect themselves. During her talk, Dr. Wisniewski will provide an overview of her ongoing grant-funded research, as well as her career-long aspirations as a “scholar activist”: someone committed to scholarly research and scientific rigor, but equally committed to their situations of origin and passionate about making the world a better place through their learned experience.


18 March 16:00 Design rules and maxims for insecurity engineering for lock designs / Marc Weber Tobias, School of Engineering, University of Pittsburgh

Webinar & FW11, Computer Laboratory, William Gates Building.

Marc Weber Tobias and his team are senior security analysts for all of the major lock manufacturers in the U.S., Europe, and the Middle East. He has developed a comprehensive set of axioms, principles, and rules for design engineers and vulnerability assessment teams to guide them in producing security products that are less likely to be easily attacked and compromised.

The lecture includes a discussion and case examples of failures by engineers to connect the dots and understand basic theories involving the compromise of locks and safes. The problem pervades every kind of product in the industry, as discussed in this presentation. Examples include the famous Kryptonite bike lock fiasco, gun locks that can be opened by a five-year-old child, gun storage cases accessed by a child in an incident that led to litigation, the design of a safe for the storage of weapons that ended in tragedy, and a clever but deeply flawed electronic padlock for protecting parcels delivered to residences.

Marc Tobias is presently writing a detailed text on this subject entitled “Tobias on Locks and Insecurity Engineering” which should be available sometime in 2023.


08 March 10:00 Sex, money, and the mating market: How big data helps us understand sexual politics / Khandis Blake, University of Melbourne

Webinar - link on talks.cam page from Monday afternoon

Why are sex differences the result of biological and economic forces? How do mating market conditions affect gendered violence? Why are so many people – including women – concerned with regulating female sexuality? For too long, our approach to gendered outcomes has quarantined the biological from the sociocultural, as if one has nothing to do with the other. Yet a close understanding of the drivers of male-male aggression, intimate partner violence, and female beauty practices shows that the biological and sociocultural often intertwine. In this talk I review a growing body of my research that uses big data to implicate mating market conditions in gendered outcomes. Using data from 113 nations, I will explain how income inequality affects the local female mating ecology and thus incentivizes intrasexual competition and status-seeking. I will then show that by disadvantaging male mate competition, the operational sex ratio and manufacturing shocks in the USA drive troubling online sub-cultures linked to gendered violence (i.e., “inCel” ideology). By linking online behaviors with offline violence, I show how social media can be used as a barometer to identify prospective hotspots of crime. By incorporating insights from behavioral ecology, social psychology, economics, and international security, I provide a functional account of gender conflict, highlighting the value of integrating competing disciplinary perspectives to understand these phenomena. With it I offer a new approach to understanding how and why sexual conflict manifests, and how attitudes toward gender are related to potential fitness payoffs.

BIO

Dr Blake is an expert on sexual politics who combines insights from evolutionary biology, psychology and big data to understand conflict and competition among people. Her research addresses big issues that profoundly influence wellbeing, including personal agency and empowerment, intimate partner violence and the varied ways in which people seek and enact status. Dr Blake convenes TwitPlat, a database of 6 billion geolocated Twitter posts spanning 9 years, and the Daily Cycle Diary, an online platform that helps women to understand how their menstrual cycle affects their psychology. She is the holder of seven international and eight domestic awards for research excellence, and has featured her work at the Festival of Dangerous Ideas, Melbourne Writer’s Festival, Melbourne International Film Festival, in The Age, The Herald Sun, The Sydney Morning Herald, and on ABC News and The Project. She is an ARC DECRA Fellow and a lecturer at the Melbourne School of Psychological Science at the University of Melbourne.


15 February 14:00 Machine Learning in context of Computer Security / Ilia Shumailov, University of Cambridge

Webinar & LT2, Computer Laboratory, William Gates Building.

Machine learning (ML) has proven to be more fragile than previously thought, especially in adversarial settings. A capable adversary can cause ML systems to break at the training, inference, and deployment stages. In this talk, I will cover my recent work on attacking and defending machine learning pipelines; I will describe how otherwise-correct ML components end up being vulnerable because an attacker can break their underlying assumptions. First, with an example of attacks against text preprocessing, I will discuss why a holistic view of the ML deployment is a key requirement for ML security. Second, I will describe how an adversary can exploit the computer systems underlying the ML pipeline to mount availability attacks at both the training and inference stages. At the training stage, I will present data ordering attacks that break stochastic optimisation routines. At the inference stage, I will describe sponge examples that soak up a large amount of energy and take a long time to process. Finally, building on my experience attacking ML systems, I will discuss developing robust defenses against ML attacks, which consider an end-to-end view of the ML pipeline.
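The sensitivity that data ordering attacks exploit can be seen in miniature. The toy below (my own illustration, not code from the talk) runs one epoch of SGD on a one-parameter least-squares model over the same two examples and shows that merely reordering them changes the final weight:

```python
# Toy illustration: stochastic optimisation is order-sensitive.
# One SGD epoch over the SAME two (x, y) examples ends at different
# weights depending only on presentation order.
def sgd_epoch(w, data, lr=0.5):
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

w_forward = sgd_epoch(0.0, [(1.0, 1.0), (2.0, 0.0)])   # -> -3.0
w_reversed = sgd_epoch(0.0, [(2.0, 0.0), (1.0, 1.0)])  # -> 1.0
```

An adversary who can reorder (but not modify) training data can steer such trajectories; the attacks in the talk scale this idea up to realistic training pipelines.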


08 February 14:00 Trojan Source: Invisible Vulnerabilities / Nicholas Boucher, University of Cambridge

Webinar & LT2, Computer Laboratory, William Gates Building.

We present a new type of attack in which source code is maliciously encoded so that it appears different to a compiler and to the human eye. This attack exploits subtleties in text-encoding standards such as Unicode to produce source code whose tokens are logically encoded in a different order from the one in which they are displayed, leading to vulnerabilities that cannot be perceived directly by human code reviewers. ‘Trojan Source’ attacks, as we call them, pose an immediate threat both to first-party software and to the software supply chain across the industry. We present working examples of Trojan-Source attacks in C, C++, C#, JavaScript, Java, Rust, Go, and Python. We propose definitive compiler-level defenses, and describe other mitigating controls that can be deployed in editors, repositories, and build pipelines while compilers are upgraded to block this attack.
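One of the repository-side mitigations described is simply detecting the Unicode bidirectional control characters that make displayed order diverge from logical order. A minimal sketch of such a check (my own sketch, assuming a simple line/column report format; not the paper's tooling):

```python
# Flag Unicode Bidi control characters in source text.
# These are the code points Trojan Source attacks rely on to make
# token display order differ from logical (compiled) order.
BIDI_CONTROLS = {
    "\u202a": "LRE", "\u202b": "RLE", "\u202c": "PDF",
    "\u202d": "LRO", "\u202e": "RLO",
    "\u2066": "LRI", "\u2067": "RLI", "\u2068": "FSI", "\u2069": "PDI",
}

def find_bidi_controls(source: str):
    """Return (line, column, name) for every Bidi control in `source`."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in BIDI_CONTROLS:
                hits.append((lineno, col, BIDI_CONTROLS[ch]))
    return hits
```

A pre-commit hook or CI step could run this over every changed file and block commits containing unexplained directional overrides, which is roughly the control now implemented by several hosting platforms and compiler lints.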


25 January 14:00 Incident Response as a Lawyers' Service / Daniel Woods, University of Innsbruck, Austria

Webinar - link on talks.cam page after 12 noon Tuesday

This talk describes an increasingly popular model of cyber incident response in which external law firms run the show. The law firm operates a 24/7 hotline as the victim firm's first point of contact, selects and hires external consultants such as forensic investigators and public relations advisors, and directs how those investigators' findings are documented and shared. At least 4,000 incidents were responded to under this model in 2018. I will present empirical evidence about how cyber insurance popularised this way of responding to incidents. I then describe preliminary findings on the downstream impacts, such as the efficiency of investigations, the extent of post-breach remediation, information sharing, and work culture in industry.
The talk is based on the paper "Incident Response as a Lawyers' Service" in IEEE Security & Privacy (DOI: 10.1109/MSEC.2021.3096742).


18 January 14:00 Transcending Transcend: Revisiting Malware Classification with Conformal Evaluation / Federico Barbero, University of Cambridge

Webinar - link on talks.cam page after 12 noon Tuesday

Machine learning for malware classification shows encouraging results, but real deployments suffer from performance degradation as malware authors adapt their techniques to evade detection. This phenomenon, known as concept drift, occurs as new malware examples evolve and become less and less like the original training examples. One promising way to cope with concept drift is classification with rejection, in which examples that are likely to be misclassified are instead quarantined until they can be expertly analyzed.

In this talk, I will discuss our IEEE S&P 2022 paper which proposes TRANSCENDENT, a rejection framework built on Transcend, a recently proposed strategy based on conformal prediction theory. In particular, I will hold your hand through the formal treatment of Transcend and the newly proposed conformal evaluators, with their different guarantees and computational properties. TRANSCENDENT outperforms state-of-the-art approaches while generalizing across various malware domains and classifiers. These insights support both old and new empirical findings, making Transcend a sound and practical solution for the first time.


14 January 15:00 Hansa Market, Cyberbunker, and Encrochat: The Security Practices of Organized Crime / Erik van de Sandt, Dutch National Police and University of Bristol

Webinar

The dominant academic and practitioners’ perspective on security revolves around law-abiding recipients (i.e., referent objects) of security who are under attack by law-breaking threat agents. Yet organized (cyber) crime has threat agents as well, and is therefore in need of security. Commission and protection of crime are inextricably linked. There is a vast underground economy that caters to large numbers of traditional and cyber criminals with specialized security products and services. Think of Hansa as a secure marketplace, Encrochat as a secure telecom provider, and Cyberbunker as a secure (i.e., bulletproof) hosting provider. Drawing on the insights of the book ‘The Deviant Security of Cyber Crime’ and past and recent cyber operations of the Dutch National High Tech Crime Unit, such as the DoubleVPN investigation, this presentation shows that cyber criminals have many more security controls at their disposal than encryption, but also face all kinds of minor, major, and even unavoidable vulnerabilities.