Department of Computer Science and Technology

Security Group

2023 seminars

18 December 14:00 Differential Privacy: Theory to Practice for the 2020 US Census / Simson Garfinkel

Webinar & LT2, Computer Laboratory, William Gates Building.

From 2016 through 2021, statisticians and computer scientists at the US Census Bureau worked on the largest and most complex deployment of differential privacy to date: using the modern mathematics of privacy to protect the census responses for more than 330 million residents of the United States as part of the 2020 Census of Population and Housing.

This talk presents a first-hand account of the challenges faced in trying to apply the still young and evolving theory of differential privacy to the world's longest-running statistical program. These challenges included the need to complete and deploy scientific research on a tight deadline; working in complex deployment environments that had been intentionally crippled to achieve cybersecurity goals; working with a hostile community of data users who did not want formal privacy protections applied to census data; and periodic interference from state and federal officials.

This talk also includes a brief introduction to differential privacy and a guide to the growing literature and the data products that the 2020 Census produced.
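
As background for readers unfamiliar with the topic, the core guarantee can be illustrated with the textbook Laplace mechanism for a counting query. The Python sketch below is purely illustrative: it is not the Census Bureau's TopDown algorithm, just the simplest example of adding calibrated noise to achieve epsilon-differential privacy, with made-up toy data.

    import random

    def laplace_sample(scale: float) -> float:
        # The difference of two i.i.d. Exponential(1/scale) draws is Laplace(0, scale).
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def dp_count(records, predicate, epsilon: float) -> float:
        # A counting query has sensitivity 1: adding or removing one person changes
        # the true count by at most 1, so Laplace noise with scale 1/epsilon gives
        # epsilon-differential privacy for this single query.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_sample(1.0 / epsilon)

    # Toy example: a noisy count of residents aged 18 or over on one census block.
    ages = [17, 21, 34, 66, 12, 45]
    print(dp_count(ages, lambda a: a >= 18, epsilon=0.5))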

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

05 December 14:00 Securing Supply Chains with Compilers / Nicholas Boucher, University of Cambridge

Webinar & LT2, Computer Laboratory, William Gates Building.

In this talk we will present a new technique for identifying software supply chain attacks. Supply chain attacks are particularly powerful due to their ability to affect many victims through the compromise of a single shared dependency. While supply chain attacks are not new, they have received significant industry, government, and research attention following multiple high-profile attacks such as SolarWinds and Log4j. The technique we will present injects metadata into compiled binaries to track the recursive set of dependencies used in their creation. This information is stored in a highly efficient probabilistic data structure to form the Automatic Bill of Materials, or ABOM. In the talk, we will describe the design of the ABOM and outline our vision for how it could be used to enable faster mitigation of future supply chain attacks.
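
The ABOM's actual encoding is specified in the paper; the sketch below only illustrates the general idea of recording hashes of transitive dependencies in a compact probabilistic set (here a toy Bloom filter in Python, with made-up dependency names), so that a responder can later ask whether a binary could contain a compromised component.

    import hashlib

    class BloomFilter:
        # A tiny Bloom filter: space-efficient probabilistic set membership
        # (no false negatives, tunable false-positive rate).
        def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: str):
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: str) -> None:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item: str) -> bool:
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    # Toy "binary" whose transitive dependencies are recorded at build time.
    abom = BloomFilter()
    for dep in ["zlib-1.2.13.tar.gz", "openssl-3.0.8.tar.gz", "log4j-core-2.14.1.jar"]:
        abom.add(hashlib.sha256(dep.encode()).hexdigest())

    # Later, a responder asks: could this binary contain the compromised dependency?
    suspect = hashlib.sha256("log4j-core-2.14.1.jar".encode()).hexdigest()
    print(abom.might_contain(suspect))   # True (definitely recorded, or a false positive)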

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

30 November 16:00 25 years of crypto wars and privacy tussles – where next? / Ross Anderson, Duncan Campbell, Ben Collier, Ahana Datta, Merlin Erroll, Gus Hosein, Julian Huppert, Jim Killock, Jen Persson, Sam Smith and Martyn Thomas

Webinar & LT2, Computer Laboratory, William Gates Building.

Man is born free, as Marx told us, but is everywhere in chains. The Internet promised freedom but is being turned into a tool of surveillance and control. The Foundation for Information Policy Research was launched in 1998 during 'Crypto War 1', in the run-up to the Regulation of Investigatory Powers Act. After a quarter century, the crypto wars are back with the CSAR in the European Parliament and the Online Safety Act in the UK, and privacy tussles extend from medical privacy to AI regulation. It's time to take stock and discuss what the past can teach and what the future might hold. What's changed, what's stayed the same, and what's next?

Website: https://www.cl.cam.ac.uk/~rja14/fipr-25th.html

The event will also be livestreamed:
Meeting ID 859 2464 2999
Passcode 444942
https://us02web.zoom.us/j/85924642999?pwd=MzQwTnZZbmhpMEhTSHRYbzlYaE5WQT09

28 November 14:00 Human-producible Adversarial Examples / David Khachaturov, University of Cambridge

Webinar & FW11, Computer Laboratory, William Gates Building.

Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world or have required sophisticated equipment such as 2D or 3D printers to be produced in the physical real world. We present the first ever method of generating human-producible adversarial examples for the real world that requires nothing more complicated than a marker pen. We call them adversarial tags. First, building on top of differential rendering, we demonstrate that it is possible to build potent adversarial examples with just lines. We find that by drawing just 4 lines we can disrupt a YOLO-based model in 54.8% of cases; increasing this to 9 lines disrupts 81.8% of the cases tested. Next, we devise an improved method for line placement to be invariant to human drawing error. We evaluate our system thoroughly in both digital and analogue worlds and demonstrate that our tags can be applied by untrained humans. We demonstrate the effectiveness of our method for producing real-world adversarial examples by conducting a user study where participants were asked to draw over printed images using digital equivalents as guides. We further evaluate the effectiveness of both targeted and untargeted attacks, and discuss various trade-offs and method limitations, as well as the practical and ethical implications of our work.
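
To make the approach concrete, the sketch below shows the general shape of such an optimisation loop, assuming PyTorch. The "detector" here is a small randomly initialised network standing in for the YOLO-based model, the soft line renderer is a simplification of the differentiable rendering used in the work, and none of the names or parameters come from the paper.

    import torch
    import torch.nn as nn

    H = W = 64
    yy, xx = torch.meshgrid(torch.arange(H).float(), torch.arange(W).float(), indexing="ij")
    pixels = torch.stack([yy.flatten(), xx.flatten()], dim=1)      # (H*W, 2) pixel coordinates

    def render_lines(image, endpoints, width=1.5, tau=0.7):
        # Differentiably draw dark line segments onto `image` (H x W, values in [0, 1]).
        coverage = torch.zeros(H * W)
        for a, b in endpoints:                                     # each endpoint is a length-2 tensor
            ab = b - a
            t = ((pixels - a) @ ab / (ab @ ab + 1e-8)).clamp(0.0, 1.0)
            proj = a + t.unsqueeze(1) * ab                         # closest point on the segment
            dist = (pixels - proj).norm(dim=1)
            coverage = torch.maximum(coverage, torch.sigmoid((width - dist) / tau))
        return image * (1.0 - coverage.view(H, W))                 # darken covered pixels

    # Stand-in "detector": a tiny random CNN, NOT the YOLO-based model from the paper.
    detector = nn.Sequential(nn.Conv2d(1, 8, 3, 2), nn.ReLU(),
                             nn.Conv2d(8, 16, 3, 2), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
    for p in detector.parameters():
        p.requires_grad_(False)

    base = torch.ones(H, W)                                        # blank "printed image"
    params = (torch.rand(4, 2, 2) * (H - 1)).requires_grad_(True)  # 4 lines, 2 endpoints each
    opt = torch.optim.Adam([params], lr=2.0)

    true_class = 3                                                 # arbitrary label for the toy setup
    for step in range(200):
        tagged = render_lines(base, params)                        # untargeted attack: push down
        logits = detector(tagged.view(1, 1, H, W))                 # the score of the correct class
        loss = logits[0, true_class]
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            params.clamp_(0.0, H - 1.0)                            # keep endpoints on the page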

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

24 November 16:00 Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks / Pranav Dahiya, University of Cambridge

Webinar & FW11, Computer Laboratory, William Gates Building.

Randomness supports many critical functions in the field of machine learning (ML), including optimisation, data selection, privacy, and security. ML systems outsource the task of generating or harvesting randomness to the compiler, the cloud service provider, or another part of the toolchain. Yet there is a long history of attackers exploiting poor randomness, or even creating it – as when the NSA put backdoors in random number generators to break cryptography. In this work we consider whether attackers can compromise an ML system using only the randomness on which such systems commonly rely. We focus our effort on Randomised Smoothing, a popular approach to train certifiably robust models, and to certify specific input datapoints of an arbitrary model. We choose Randomised Smoothing since it is used for both security and safety – to counteract adversarial examples and quantify uncertainty respectively. Under the hood, it relies on sampling Gaussian noise to explore the volume around a data point to certify that a model is not vulnerable to adversarial examples. We demonstrate an entirely novel attack, where an attacker backdoors the supplied randomness to falsely certify either an overestimate or an underestimate of robustness by up to 81 times. We demonstrate that such attacks are possible, that they require very small changes to randomness to succeed, and that they are hard to detect. As an example, we hide an attack in the random number generator and show that the randomness tests suggested by NIST fail to detect it. We advocate updating the NIST guidelines on random number testing to make them more appropriate for safety-critical and security-critical machine-learning applications.
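
For readers unfamiliar with the technique, the sketch below shows Monte Carlo randomised smoothing in its simplest form: Python with NumPy, a toy stand-in classifier, and without the confidence-interval correction a real certifier uses. The point relevant to the talk is the single line that draws the Gaussian noise: the certificate is only as trustworthy as that randomness.

    import numpy as np
    from statistics import NormalDist

    def base_classifier(x: np.ndarray) -> int:
        # Stand-in model: class 1 if the mean feature exceeds a threshold.
        return int(x.mean() > 0.5)

    def certify(x: np.ndarray, sigma: float = 0.25, n: int = 1000, seed: int = 0):
        # Vote the base classifier over Gaussian perturbations of x, then convert the
        # winning vote share p_hat into a certified L2 radius of sigma * Phi^-1(p_hat).
        rng = np.random.default_rng(seed)                  # <- the randomness being trusted
        noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
        votes = np.array([base_classifier(x + e) for e in noise])
        share = votes.mean()
        top_class = int(share >= 0.5)
        p_hat = min(max(share, 1.0 - share), 1.0 - 1e-6)   # winning share, clipped away from 1
        radius = sigma * NormalDist().inv_cdf(p_hat)
        return top_class, radius

    x = np.full(16, 0.6)                                   # toy input with 16 features
    print(certify(x))
    # If an attacker replaces rng with a biased generator (e.g. noise that avoids the
    # decision boundary), p_hat and hence the certified radius are inflated without
    # touching the model itself.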

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

14 November 14:00 Red teaming privacy-preserving systems using AI / Yves-Alexandre de Montjoye, Imperial College London

Webinar & LT2, Computer Laboratory, William Gates Building.

Companies and governments are increasingly relying on privacy-preserving techniques to collect and process sensitive data. In this talk, I will discuss our efforts to red team deployed systems and argue that red teaming is essential to protect privacy in practice. I will first briefly describe how traditional de-identification techniques mostly fail in the age of big data. I will then show how implementation choices and trade-offs have enabled attacks against real-world systems, from query-based systems to differential privacy mechanisms and synthetic data. I will conclude by describing recent successes in using AI to automatically discover vulnerabilities.

31 October 14:00 Data, Identity and Governance: Evolution and Trajectory of Digital Public Infrastructure in India / Aakansha Natani, IIIT Hyderabad, India

Webinar & FW11, Computer Laboratory, William Gates Building.

In a country with a population of 1.5 billion people, Digital Public Infrastructure (DPI) has played a crucial role in identity verification, financial inclusion and public service delivery over the last decade. In 2009, the UIDAI Aadhaar project was launched to create a unique digital ID based on biometric data and to offer authentication as a service. Almost 90 percent of India's population signed up for Aadhaar within a few years, and it is now used for opening bank accounts, filing income-tax returns, receiving subsidised rations, buying a mobile SIM, and more. Later, products such as DigiLocker, which stores digital versions of various documents, an electronic Know-Your-Customer (KYC) service, and digital signatures on demand (e-Sign) were developed in addition to Aadhaar. This in turn led to the rollout of the Unified Payments Interface (UPI), Direct Benefit Transfer and the CoWIN application, which managed India's vaccination programme. Aadhaar has become a bona fide proof of identity residing in the cloud, and it can be used to identify any individual for any service-delivery transaction. The government is now working towards integrating over 20 new services into the DPI. India has also recently signed memoranda of understanding with eight countries, offering them India Stack (a collection of open APIs) and DPI at no cost and with open-source access.
At the same time, civil society and academia have raised multiple concerns about biometric reliability, especially for manual labourers, data breaches and cyber risks, misuse and financial fraud, and privacy violations and surveillance. In this context, this talk will offer an insight into India's experiments with DPI, with a focus on its philosophy, approach and citizen-centric concerns.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

17 October 14:00 CIA to AIC: Why Cyber Must Enable the Business / Keith A Price, Chief Security Officer, National Highways

Webinar & FW11, Computer Laboratory, William Gates Building.

The traditional mantra of Confidentiality, Integrity, and Availability (CIA) no longer supports business objectives. Availability and Integrity of systems and data are now king, critical to the survivability of enterprises, with Confidentiality coming a distant third (or possibly even fourth and beyond). The 24-hour news cycle contributes heavily to the numbness with which society now views cyber breaches, with millions of private citizens having had their Personally Identifiable Information (PII) leaked, in some cases for decades. My talk will provide insights into my journey as both a victim of and a warden against cybercrime. I will touch on why our industry may be contributing to the never-ending rise in the cost of cybercrime to society.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

10 October 14:00 A Tale of Two Internet (In)Security Problems / Michael Schapira, Hebrew University of Jerusalem

Webinar & FW26, Computer Laboratory, William Gates Building.

Despite Herculean efforts, the Internet's communications infrastructure remains alarmingly vulnerable to attacks. In this talk, I will exemplify the Internet's security holes through two prominent examples: Internet routing and time synchronization. I will discuss what I view as the main obstacles to progress on securing the Internet and how I believe that these can be overcome.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

19 September 14:00 On the Insecurity of PLC Systems / Eli Biham, Technion

Webinar & LT2, Computer Laboratory, William Gates Building.

In a series of papers, we studied the security of Siemens PLC systems. We first showed that it is possible to stealthily download any attacker-chosen control program into Siemens PLCs, bypassing their cryptographic protections (a variant of HMAC-SHA256 under a supposedly secret key). We could even download a fake executable unrelated to the downloaded source program, thus preventing PLC engineers from identifying the fake program even if they suspect the PLC's behaviour. Following Siemens' recommendation to protect against these attacks by using passwords, we studied the password schemes and found various vulnerabilities in some versions of the PLCs. A major protection step taken by Siemens was to use TLS instead of its home-grown cryptographic protection. This change seems good practice in general, but it has several weaknesses. One is the long upgrade cycle of PLC firmware once a vulnerability is found, which makes any standard (complex) IT software installed on the PLC a security threat. Moreover, we show that the TLS protection allows an attacker to perform new, strong attacks that were not possible against the home-grown cryptographic version. Last but not least, in a recent open PLC product Siemens uses Intel processors that run the (encrypted) PLC firmware and a Windows OS on different cores of the same processor, under a hypervisor. Unfortunately, nothing prevents an attacker from running their own fake version of the PLC firmware. We conclude that the whole security ecosystem and the security assumptions of PLCs should be revisited: the currently existing protection schemes do not address the real threats to PLCs. In other work, we proposed a framework for cryptographic protection of PLC communications.
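
For context, the integrity check being discussed is, in spirit, a message authentication code over the downloaded program: the check itself is sound, but it only proves knowledge of the key. The Python sketch below is a generic HMAC-SHA256 illustration, not Siemens' actual scheme or key handling, and the program strings are invented.

    import hmac
    import hashlib

    SHARED_KEY = b"supposedly-secret-firmware-key"   # in practice, recoverable by an attacker

    def tag_program(program: bytes) -> bytes:
        return hmac.new(SHARED_KEY, program, hashlib.sha256).digest()

    def plc_accepts(program: bytes, tag: bytes) -> bool:
        # Constant-time comparison; the check is sound, but it only authenticates
        # "someone who knows SHARED_KEY", which is not limited to the legitimate
        # engineering workstation once the key leaks.
        return hmac.compare_digest(tag_program(program), tag)

    legit = b"ladder-logic: open valve when pressure < 5 bar"
    forged = b"ladder-logic: keep valve open regardless of pressure"

    print(plc_accepts(legit, tag_program(legit)))     # True
    print(plc_accepts(forged, tag_program(forged)))   # True -- anyone with the key can sign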

21 July 16:00 EviHunter: Identifying Digital Forensic Artifacts from Android Apps/Devices via Static & Dynamic Analysis + Android™ App Forensic Artifacts Database / Yong Guan, Iowa State University

Webinar & FW11, Computer Laboratory, William Gates Building.

We are seeing an increasing trend of mobile app evidence in reported cases in the US and globally. Our prior study of the global app markets showed that real-world mobile apps now exceed 8 million, and many apps are frequently updated. Commercial mobile device forensic toolkits, such as Cellebrite UFED, can help physically acquire, search, and recover evidence and produce reports. However, most crime labs suffer significantly large backlogs due to an overly long investigation process (often one or two days of an investigator's effort per device, with an average of 40-80 apps on a device). The lack of expert knowledge about many of these apps has led to missed or misunderstood evidence, which results in error-prone investigations and in turn contributes to large backlogs in crime labs. Most existing tools demand that investigators have the expertise and related experience to utilize them, and the investigative results often depend heavily on the experience and knowledge of the investigator. With the support of NIST, CSAFE, and many crime labs, we have developed EviHunter, a set of toolkits to simplify and automate the mobile device investigation process with better guarantees of completeness and accuracy. EviHunter leverages taint analysis to retrieve the information flow within an app from source APIs to sink APIs, delivering detailed, accurate, and timely findings of digital evidence stored in the local file system or in a third-party cloud service (e.g., Google/Amazon/Microsoft). Our dynamic EviHunter modifies the Android OS and forces the system to always run in interpreter mode, into which we have inserted taint-propagation code to follow the data flow in an app. We have cross-validated the analysis results from static and dynamic EviHunter, and are building the integrated results into the Android app forensic artifacts database. With it, practitioners can hopefully reduce the investigation of one device to 20 minutes of work, with repeatable and verifiable guarantees. At the end of the talk, we will discuss several future directions this line of research can lead to. We will also briefly discuss other interesting forensic, security, and privacy research issues and efforts.
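
EviHunter's static and dynamic analyses are considerably more involved, but the core idea of taint tracking (values derived from sensitive source APIs carry labels that are reported when they reach a sink such as a file write) can be sketched as follows. The Python below is illustrative only; the source and sink names are hypothetical and do not come from EviHunter.

    # Tainted values remember where they came from; anything derived from a tainted
    # value inherits its labels, and sinks report which sources reach them.
    class Tainted(str):
        def __new__(cls, value, labels):
            obj = super().__new__(cls, value)
            obj.labels = frozenset(labels)
            return obj

    def get_last_gps_fix():                       # hypothetical source API
        return Tainted("52.2053,0.1218", {"LOCATION"})

    def concat(a, b):                             # taint propagates through derivation
        labels = getattr(a, "labels", frozenset()) | getattr(b, "labels", frozenset())
        return Tainted(str(a) + str(b), labels)

    def write_file(path, data):                   # hypothetical sink API
        labels = getattr(data, "labels", frozenset())
        if labels:
            print(f"evidence: {sorted(labels)} data flows into {path}")

    record = concat("last_fix=", get_last_gps_fix())
    write_file("/data/data/com.example.app/files/history.txt", record)
    # -> evidence: ['LOCATION'] data flows into /data/data/com.example.app/files/history.txt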

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

27 June 14:00 A View of the Dark Web through the Lens of NLP and Language Modeling / Youngjin Jin, Korea Advanced Institute of Science & Technology (KAIST)

Webinar - link on talks.cam page after 12 noon Tuesday

The Dark Web has always been a domain of interest for cybersecurity researchers looking to gain insight into emerging cybercriminal activities such as the sharing of illegal content, scams, malware, etc. As studies on the Dark Web commonly require textual analysis of the domain, language models specific to the Dark Web may provide valuable insights to researchers. In this talk, we begin with a brief introduction to the Dark Web, followed by an analysis of Dark Web text using NLP techniques to uncover some characteristics of how language is used there. We then introduce DarkBERT, a language model pretrained on Dark Web data, and illustrate the benefits that a Dark Web domain-specific model like DarkBERT can offer in various use cases.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

20 June 14:00 The Unlearning Problem(s) / Anvith Thudi, University of Toronto

Webinar & LT2, Computer Laboratory, William Gates Building.

The talk presents challenges facing the study of machine unlearning. The need for machine unlearning, i.e., obtaining a model one would get without training on a subset of data, arises from privacy legislation and as a potential solution to data poisoning. The first part of the talk discusses approximate unlearning and the metrics one might want to study. We highlight methods for two desirable (though often disparate) notions of approximate unlearning. The second part departs from this line of work by asking if we can verify unlearning. Here we show how an entity can claim plausible deniability, and conclude that at the level of model weights, being unlearnt is not always a well-defined property.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

06 June 14:00 The Nym mixnet: Design and Evaluation / Harry Halpin & Ania Piotrowska, Nym Technologies

Webinar & SS03, Computer Laboratory, William Gates Building.

The Nym mixnet is a next-generation, continuous-time mixnet based on the Loopix design that aims to defeat global passive adversaries capable of observing all network traffic while maintaining high performance and availability. We will present the system as a whole, including its blockchain-based directory authority and token-based incentive scheme, as well as the differences from the original Loopix design that arose during implementation. We will also present, for the first time, an unpublished head-to-head comparison with Tor in both closed-world and open-world settings.
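
As background on the "continuous time" property inherited from Loopix: each packet is independently delayed at every hop by an exponentially distributed amount, so exit order and timing reveal little about entry order. The toy simulation below (plain Python, with made-up parameters) illustrates only that property, not Nym's implementation.

    import random

    random.seed(7)
    HOPS = 3
    MEAN_DELAY = 0.05        # seconds per hop; a real deployment tunes this

    def send_through_mixnet(packet_id: str, start_time: float) -> float:
        # Return the exit time of a packet that is independently delayed at each hop
        # by an Exponential(1 / MEAN_DELAY) amount.
        t = start_time
        for _ in range(HOPS):
            t += random.expovariate(1.0 / MEAN_DELAY)
        return t

    # Two packets entering in quick succession can easily leave in the other order,
    # which is what frustrates a global passive observer.
    exits = {p: send_through_mixnet(p, start) for p, start in [("A", 0.000), ("B", 0.001)]}
    print(sorted(exits, key=exits.get))   # exit order is independent of entry order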

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

16 May 14:00 The INCEL Movement: A New Form of Technology-Facilitated Abuse / Melissa Hamilton, University of Surrey

Webinar & LT2, Computer Laboratory, William Gates Building.

INCELs (involuntary celibates) are predominantly males with a grievance that women refuse to have sexual relationships with them because of their perceived genetic inferiority. INCELs are radicalised online and have been spreading misogynistic, hateful, and violent messages across various technology platforms. A growing number of INCELs have taken their extremism offline by carrying out mass killings. This presentation demonstrates the dangers posed by INCELs. Discussion will also center on methods to detect those who pose the greatest threat to society and to prevent further violence.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

09 May 14:00 Testing the Effectiveness of Targeted Financial Sanctions on Russia: Law or War? / Jason Sharman, University of Cambridge

Webinar & LT2, Computer Laboratory, William Gates Building.

We conducted field experiments before and after the invasion of Ukraine to test the effectiveness of sanctions designed to exclude specified Russian government officials from the international financial system. Researchers impersonated sanctioned individuals and made email solicitations to intermediary firms to establish shell companies and set up corporate bank accounts. Results of responses to the sanctioned names are compared to equivalent solicitations from non-sanctioned individuals in an innocuous placebo condition. If sanctions are effective, private-sector intermediaries should be much less willing to do business with the sanctioned names, relative to the low-risk placebo names, and should also conduct stricter due diligence. The first round of the experiment was implemented before the Russian invasion of Ukraine. The second round was in May of 2022, a few months after the invasion. The pre-invasion approaches from sanctioned names could get access to the financial system almost as easily as the low-risk unsanctioned individuals, suggesting sanctions were ineffective. In contrast, in the post-invasion round, solicitations from sanctioned names were far less likely to receive a response than those from low-risk unsanctioned individuals, suggesting that the sanctions had become much more effective, even though the relevant sanctions law had not changed.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

07 March 14:00 Algorithms and the criminal justice system: promises and challenges in deployment and research / Miri Zilka, University of Cambridge

Webinar & FW11, Computer Laboratory, William Gates Building.

The criminal justice (CJ) system has embraced the use of algorithmic tools. They are employed as decision aids, from policing to parole decisions. This may bring benefits such as improved efficiency and consistency, but also raises many concerns. We will discuss the gap between the promised benefits and what is happening in practice, highlighting challenges around data, ethics, and regulations.

On the data front, a key limitation of current ML-for-CJ research is an overfocus on only a few datasets, such as COMPAS. Moreover, domain context is rarely taken into account even for these datasets. We will discuss our work on enabling researchers to 1) better utilise existing CJ datasets, and 2) create new datasets using human-machine cooperation. The latter relates to our effort to increase transparency in the UK court system by making trial transcripts amenable to quantitative research.

28 February 14:00 On the motivations and challenges of affiliates involved in cybercrime / Masarah Paquet-Clouston, Université de Montréal

Webinar - link on talks.cam page after 12 noon Tuesday

The cybercrime industry is characterised by work specialisation to the point that it has become a volume industry with various “as-a-service” offerings. One well-established “as-a-service” business model is blackmarket pay-per-install (PPI) services, which outsource the spread of malicious programmes to affiliates. Such a business model represents the archetype of specialisation in the cybercrime industry: a mass of individuals, known as affiliates, specialise in spreading malware on behalf of a service. Extant literature has focused on understanding the scope of such a service and its functioning. However, despite the large number and aggregate effect of affiliates on cybercrime, little research has been done on understanding why and how affiliates participate in such models. This talk summarizes a study that depicts the motivations and challenges of affiliates spreading Android banking Trojan applications through a blackmarket PPI service. In short, we conducted a thematic analysis of over 6,000 of their private chat messages. The findings highlight affiliates’ labour-intensive work and precarious working conditions along with their limited income, especially compared to their expectations. Affiliates’ participation in cybercrime was found to be entangled between legal and blackmarket programmes, as affiliates did not care about programmes’ legal status as long as they yielded money. This study contributes to the literature by providing additional evidence on the downsides of work specialisation emerging from the cybercrime industry.

21 February 14:00 No One to Blame, but...: Fear and Failure in Securing Large Organisations / Ahana Datta, University College London

Webinar & FW11, Computer Laboratory, William Gates Building.

When staff at a critical national infrastructure organisation were recently polled to associate a word with infosec, they chose "fear". This is a talk about fear and failures - unavoidable and avoidable - their systemic and institutional causes, and how to overcome them. Using case studies from large organisations such as the civil service, aviation, CNI, and media, I will discuss the role of security engineering, purple team operations, threat and compliance. Drawing from my experience as a head of information security/chief information security officer, I attribute poor organisational security to failures in correctly interplaying people, processes, and technology. I will discuss issues such as why user access is breached despite multi-factor authentication and dedicated identity and access teams; why legacy technology remains misunderstood and patch management full of friction; how to know you've hired the right (or wrong) expertise; and why we still get hacked despite all the right intentions, if not the right incentives. I will explore third parties and supply chains, deploying security tools, disjointed processes undermining secure behaviours, the perils of confusing regulation with a threat model for security, incident management and reactive security, as well as why boards struggle to care about information security, and how to make them.

14 February 14:00 Binary Stars: How Crime Shapes Insurance and Insurance Shapes Crime / Anja Shortland, King's College London

Webinar & FW11, Computer Laboratory, William Gates Building.

Crime creates demand for insurance but supplying insurance can inadvertently promote crime. How do insurers reduce uncertainty, pay-outs, and their exposure to extreme and correlated losses from crime? And how do criminals respond to insurers’ attempts to “manage” crime? In this paper we conceptualize insurance and certain types of crime as binary stars, co-evolving as each side innovates and responds to the other side’s innovations. We examine this in five case studies: auto theft, art theft, kidnap and hijack for ransom, ransomware, and payment card fraud. We find that insurers counter criminal innovations that challenge profits by engaging with insureds and third parties: to reduce criminal opportunities, limit damage, salvage stolen property and cap criminal profits. They also increase the risk of detection, capture, and conviction of criminals that defy the (implicit) rules of the game. Across the case studies, “insurance as crime governance” follows a market logic: it erects barriers to opportunistic crime and engages in strategic interaction with sophisticated and organized crime. Insurance tolerates crime if prevention is costlier than covering losses and avoids covering non-profit-motivated crimes.

31 January 16:00 Influence Policing: Mapping the Rise of Strategic Communications and Digital Behaviour Change within UK Law Enforcement and Security Services / Ben Collier, University of Edinburgh

Webinar - link on talks.cam page after 12 noon Tuesday

In this talk, I set out an emerging phenomenon in UK law enforcement - the use of digital ‘nudge’ communications campaigns to achieve strategic policing and security goals. Over the last year, we have studied the use of these campaigns by a single force - Police Scotland - in depth, drawing on empirical research conducted with their dedicated strategic communications team. These campaigns, which involve extremely targeted digital communications designed to directly ‘nudge’ behaviour and shape the culture of particular groups, began in counter-radicalisation as part of the UK’s Prevent programme, but have since moved into a range of other policing areas, from hate crime and domestic violence to knife crime and cybercrime. I set out the historical context of these campaigns in the UK, from their roots in social marketing, through the various iterations of the Prevent strategy, the rise of algorithmic digital marketing infrastructures and surveillance capitalist platforms, and their subsequent transfer from counter-terror policing to a range of other areas. Our study explores the developing institutional and professional arrangements around these campaigns in Police Scotland and the wider UK through interviews and document-based research, drawing on case studies of campaigns across a range of areas. Taking these together, we theorise the rise of influence policing as an embryonic but rapidly emerging domain of police practice, and discuss the ethical, institutional, and democratic implications for the future of law enforcement in the UK.

Ben Collier is Lecturer in Digital Methods at the Institute of Science, Technology, and Innovation Studies at the University of Edinburgh. His research focuses on digital infrastructure as a site of power and resistance, including mixed-methods studies of cybercrime communities, law enforcement engagements with Internet infrastructure, and an upcoming book with MIT Press which maps a cultural history of the Tor anonymity network.