Department of Computer Science and Technology

Security Group

2021 seminars

07 December 15:00 - Why Johnny doesn’t write secure software? / Awais Rashid, University of Bristol

Webinar

Software is in the very fabric of the systems we utilise in our daily lives - from online banking to social media through to critical infrastructures that bring water and electricity to our homes and drive systems such as transportation, health and governmental services. Yet vulnerabilities in software continue to be a recurring issue despite major advances in libraries, APIs and tools to help developers write secure software and test the security of their software systems. More than 20 years ago, Alma Whitten and Doug Tygar wrote about the usability challenges faced by an archetypal user (Johnny) when utilising cryptography to secure communications. Developers face similar challenges when utilising the security libraries, APIs and tools at their disposal. In this talk, I will discuss insights from over 5 years of research on these struggles and their potential impact on the security of the resultant software. I will conclude by discussing ongoing work on exploring developers’ understanding of hardware security advances such as CHERI and how these may shape the way they develop software on future secure hardware architectures.

Bio: https://research-information.bris.ac.uk/en/persons/awais-rashid

30 November 14:00 - Securing the Future: Futures Literacy in Cyber Security / Genevieve Liveley, University of Bristol

Webinar

Across a wide spectrum of activities, cyber security depends on rigorous futures thinking in order to inform our decision-making in the present – whether that’s designing trusted products, managing cyber security risk, or assessing the potential risks and benefits of emerging technologies (EmTech) and their cyber security implications in different contexts (from local government and critical national infrastructure, to wider society, families, and individuals). This seminar will explore examples of good practice in such futures thinking, and will ask what is needed to enhance futures expertise for different cyber security communities.

23 November 14:00 - Teardown of encrypted USB Flash drives / Sergei Skorobogatov, University of Cambridge

LT1, Computer Laboratory, William Gates Building.

There are many solutions for keeping user data secure on USB Flash drives, but the most reliable ones are based on hardware encryption. Many encrypted USB Flash drives are certified to the high FIPS 140-2 Level 3 standard. However, very little public research has been done to evaluate the hardware security of those devices.

The purpose of this talk is to present a teardown and feasibility study of IronKey and other encrypted USB Flash drives. As a result, users of these devices can be better informed about the real level of security protection they get. More than 20 different devices were torn down and their hardware solutions evaluated against possible attacks. Some potential flaws will be exposed, and these findings are likely to stimulate further research into the specific solutions being used to protect user data.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

16 November 14:00 - Cybersecurity for Democracy: Providing Independent Auditing Frameworks for Platform Accountability / Damon McCoy, New York University

Webinar

Large platforms, such as Facebook, have become the mediators of public discourse, which has created tensions because these platforms are designed to maximize engagement and advertising revenue. In this talk, I will motivate why it is essential to conduct independent auditing of these platforms and present our frameworks for improving platform accountability. Using our auditing frameworks, I will present results from two case studies: independently auditing Facebook's political advertising transparency, and measuring differences in engagement with partisan news content based on its factualness. Finally, I will discuss challenges to auditing platforms and present recommendations for improving access to public platform content.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

09 November 14:00 - Blind Backdoors in Deep Learning / Eugene Bagdasaryan, Cornell Tech

Webinar

We investigate a new method for injecting backdoors into machine learning models, based on compromising the loss-value computation in the model-training code. We use it to demonstrate new classes of backdoors strictly more powerful than those in the prior literature: single-pixel and physical backdoors in ImageNet models, backdoors that switch the model to a covert, privacy-violating task, and backdoors that do not require inference-time input modifications.

Our attack is blind: the attacker cannot modify the training data, nor observe the execution of his code, nor access the resulting model. The attack code creates poisoned training inputs "on the fly," as the model is training, and uses multi-objective optimization to achieve high accuracy on both the main and backdoor tasks. We show how a blind attack can evade any known defense and propose new ones.
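To make the loss-compromise idea concrete, here is a minimal, hypothetical sketch (not the speaker's actual code) of how a blinded loss computation could combine a main-task objective with a backdoor objective on poisoned inputs synthesized on the fly; the single-pixel trigger, target label and fixed weighting below are illustrative assumptions.

```python
# Hypothetical sketch of a "blind" backdoor loss, loosely following the idea in the
# abstract: the attacker compromises the loss-value computation so that training
# optimises both the main task and a backdoor task. Names, trigger, and weights
# are assumptions for illustration only.
import torch
import torch.nn.functional as F

def add_trigger(x, value=1.0):
    """Synthesise a poisoned input on the fly, e.g. by setting a single pixel."""
    x = x.clone()
    x[..., 0, 0] = value          # single-pixel trigger in the top-left corner
    return x

def blind_loss(model, x, y, backdoor_label=0, alpha=0.5):
    """Multi-objective loss: main-task loss plus backdoor-task loss."""
    loss_main = F.cross_entropy(model(x), y)
    x_poisoned = add_trigger(x)
    y_backdoor = torch.full_like(y, backdoor_label)
    loss_backdoor = F.cross_entropy(model(x_poisoned), y_backdoor)
    # A real attack would balance the two objectives adaptively
    # (multi-objective optimisation); a fixed weight is used here for brevity.
    return alpha * loss_main + (1 - alpha) * loss_backdoor
```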

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

01 November 14:00 - Computational Methods to Measure and Mitigate Weaponized Online Information / Gianluca Stringhini, Boston University

Webinar

The Web has redefined the way in which malicious activities such as harassment, disinformation, and radicalization are carried out. To be able to fully understand these phenomena, we need computational tools able to trace malicious activity as it happens and identify influential entities that carry it out. In this talk, I will present our efforts in developing tools to automatically monitor and model malicious online activities like coordinated aggression, conspiracy theories, and disinformation. I will then discuss possible mitigations against these harmful activities, keeping in mind the potential unintended consequences that might arise from suspending offending users.


Bio:


Gianluca Stringhini is an Assistant Professor in the ECE Department at Boston University, holding affiliate appointments in the Computer Science Department, in the Faculty of Computing and Data Sciences, in the BU Center for Antiracist Research, and in the Center for Emerging Infectious Diseases Policy & Research. In his research Gianluca applies a data-driven approach to better understand malicious activity on the Internet. Through the collection and analysis of large-scale datasets, he develops novel and robust mitigation techniques to make the Internet a safer place. His research involves a mix of quantitative analysis, (some) qualitative analysis, machine learning, crime science, and systems design. Over the years, Gianluca has worked on understanding and mitigating malicious activities like malware, online fraud, influence operations, and coordinated online harassment. He received multiple prizes including an NSF CAREER Award in 2020, and his research won multiple Best Paper Awards. Gianluca has published over 100 peer reviewed papers including several in top computer security conferences like IEEE Security and Privacy, CCS, NDSS, and USENIX Security, as well as top measurement, HCI, and Web conferences such as IMC, ICWSM, CSCW, and WWW.


RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

26 October 14:00 - Resilient Machine Learning: A Systems-Security Perspective / Roei Schuster, Cornell Tech

Webinar - link on talks.cam page after 12 noon Tuesday

The security and privacy of ML-based systems are becoming increasingly difficult to understand and control, as subtle information-flow dependencies unintentionally introduced by the use of ML expose new attack surfaces in software. We will first present select case studies on data leakage and poisoning in NLP models that demonstrate this problem. We will then conclude by arguing that current defenses are insufficient, and that this calls for novel, interdisciplinary approaches that combine foundational tools of information security with algorithmic ML-based solutions.

We will discuss leakage in common implementations of nucleus sampling --- a popular approach for generating text, used for applications such as text autocompletion. We show that the series of nucleus sizes produced by an autocompletion language model uniquely identifies its natural-language input. Unwittingly, common implementations leak nucleus sizes through a side channel, thus leaking what text was typed, and allowing an attacker to de-anonymize it.
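For readers unfamiliar with nucleus (top-p) sampling, the sketch below shows a generic implementation and why the nucleus size is an input-dependent quantity that can act as a side channel if exposed; it is an illustration under the standard definition, not the implementations analysed in the talk.

```python
# Minimal sketch of nucleus (top-p) sampling, to illustrate why the nucleus size
# depends on the model's context and can fingerprint the input if it leaks.
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample from the smallest set of tokens whose cumulative probability >= p."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]                 # tokens sorted by probability
    cumulative = np.cumsum(probs[order])
    nucleus_size = int(np.searchsorted(cumulative, p) + 1)
    nucleus = order[:nucleus_size]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    token = rng.choice(nucleus, p=nucleus_probs)
    # The sequence of nucleus_size values depends on the text typed so far; if an
    # implementation exposes it (e.g. through timing or message sizes), an observer
    # can use it as a side channel to identify the input.
    return token, nucleus_size
```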

Next, we will present data-poisoning attacks on language-processing models that must train on "open" corpora originating in many untrusted sources (e.g. Common Crawl). We will show how an attacker can modify training data to "change word meanings" in pretrained word embeddings thus controlling outputs of downstream task solvers (e.g. NER or word-to-word translation), or poison a neural code-autocompletion system, so that it starts making attacker-chosen insecure suggestions to programmers (e.g. to use insecure encryption modes). This code-autocompletion attack can even target specific developers or organizations, while leaving others unaffected.

Finally, we will briefly survey existing classes of defenses against such attacks, and explain that they are critically insufficient: they provide only partial protection, and real-world ML practitioners lack the tools to tell whether and how to deploy them. This calls for new approaches, guided by fundamental information-security principles, that analyze security of ML-based systems in an end-to-end fashion, and facilitate practicability of the existing defense arsenal.

Bio: Roei Schuster is a computer science PhD candidate, advised by Eran Tromer. For the past 4 years, he has been a researcher at Cornell Tech, where he is hosted by Vitaly Shmatikov. Previously, he completed his B.Sc. in computer science at the Technion, and worked as a researcher in the information security industry.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

12 October 14:00 - Super-Posters in Extremist Forums / Stephane Baele, University of Exeter

Webinar - link on talks.cam page after 12 noon Tuesday

Anecdotal evidence suggests that discussions in extremist forums are overwhelmingly driven by a minority of extremely active contributors or “super-posters”. Using an extensive dataset of 19 extremist forums and 2 non-extremist ones, the present study tests whether these observations generalize across ideologies (far-right, Salafi-jihadist, incel, Christian fundamentalist) and forum sizes. Using a two-dimensional methodology to accurately measure the level and nature of super-posting (Gini coefficient and network analysis), it delivers key insights into the interaction dynamics structuring extremist forums, and assesses whether this phenomenon is a hallmark of extremism or a by-product of internet forums in general.
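As a rough illustration of the first dimension of that methodology, the Gini coefficient of per-user post counts gives a single inequality measure per forum (0 means posting is spread evenly, values near 1 mean a few users dominate); the snippet below is a generic sketch, not the author's exact pipeline.

```python
# Generic sketch: Gini coefficient of per-user post counts as a "super-posting" measure.
import numpy as np

def gini(post_counts):
    """Gini coefficient of a non-negative array of per-user post counts."""
    x = np.sort(np.asarray(post_counts, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return float((2 * ranks - n - 1).dot(x) / (n * x.sum()))

# Example: ten casual posters and one super-poster
print(gini([3, 5, 2, 4, 1, 2, 3, 2, 4, 3, 500]))   # about 0.87 -- dominated by one user
```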


RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

20 July 14:00 - It's not only the dark web: Distributed surface web marketplaces / Constantinos Patsakis, University of Piraeus

Webinar

Currently, there are various monetisation methods for cybercriminals, who have access to plenty of money laundering schemes, enabling the creation of a huge, underground worldwide economy. A clear indicator of these activities is online marketplaces which allow cybercriminals to trade their stolen assets, offer their services, and sell illegal goods. While traditionally these marketplaces are available through the dark web, several of them have emerged on the surface web. In this work, we perform a longitudinal analysis of a surface web marketplace. The information was collected through targeted web scraping that allowed us to identify hundreds of merchants' profiles on the most widely used surface web marketplaces. In this regard, we discuss the products traded in these markets, their prices, their availability, and the exchange currency. This analysis is performed in an automated way through a machine learning-based pipeline, allowing us to quickly and accurately extract the needed information.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

18 May 14:00 - Apple, App Tracking Transparency, and the Hidden Costs of More Privacy / Sam Gilbert, University of Cambridge

Webinar

Apple's new App Tracking Transparency feature has been lauded for upholding individuals' rights and freedoms. However, there are material economic and social costs to valorizing privacy - some obvious, some less so. This talk by the author of the new book Good Data: An Optimist's Guide to Our Digital Future explores these trade-offs, offering an alternative theory of digital power to "surveillance capitalism".


RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

04 May 14:00 - FastPay: High-Performance Byzantine Fault Tolerant Settlement / Alberto Sonnino, UCL

Webinar

FastPay allows a set of distributed authorities, some of which are Byzantine, to maintain a high-integrity and availability settlement system for pre-funded payments. It can be used to settle payments in a native unit of value (cryptocurrency), or as a financial side-infrastructure to support retail payments in fiat currencies. FastPay is based on Byzantine Consistent Broadcast as its core primitive, foregoing the expenses of full atomic commit channels (consensus). The resulting system has low-latency for both confirmation and payment finality. Remarkably, each authority can be sharded across many machines to allow unbounded horizontal scalability. Our experiments demonstrate intra-continental confirmation latency of less than 100ms, making FastPay applicable to point of sale payments. In laboratory environments, we achieve over 80,000 transactions per second with 20 authorities---surpassing the requirements of current retail card payment networks, while significantly increasing their robustness.
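A minimal sketch of the quorum logic such a Byzantine-consistent-broadcast settlement system relies on is shown below: with N = 3f + 1 equally weighted authorities, a transfer becomes final once 2f + 1 distinct authorities have signed it. The data structures, the elided signature verification and the stake-agnostic counting are simplifying assumptions for illustration, not the FastPay implementation.

```python
# Sketch of a quorum-certificate check for a pre-funded payment settlement layer.
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedVote:
    authority: str        # authority identifier
    transfer_id: str      # hash of the transfer order being certified
    signature: bytes      # signature over transfer_id (verification elided here)

def is_certified(votes: list[SignedVote], transfer_id: str, n_authorities: int) -> bool:
    """A certificate is valid if a quorum of distinct authorities signed this transfer."""
    f = (n_authorities - 1) // 3                   # tolerated Byzantine authorities
    quorum = 2 * f + 1
    signers = {v.authority for v in votes if v.transfer_id == transfer_id}
    return len(signers) >= quorum

# Example: 4 authorities tolerate f = 1 failure, so 3 signatures certify a payment.
votes = [SignedVote(a, "tx42", b"...") for a in ("auth1", "auth2", "auth3")]
print(is_certified(votes, "tx42", n_authorities=4))   # True
```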

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

27 April 14:00 - Mix-net cryptoeconomics: Rebooting privacy-preserving communications as incentive-driven collaborative projects / Aggelos Kiayias, University of Edinburgh

Webinar - link in abstract

Applying cryptographic operations in a distributed and collaborative fashion has for more than four decades been a dominant paradigm in the design of privacy-preserving communication systems, and specifically mix-nets. The design of such systems presents to the end user a guarantee that typically involves a threshold condition. For instance, such a condition could have the form "as long as x out of a certain set of y cryptographic keys remain uncompromised, then privacy is maintained." Enforcing the condition is typically out of scope for the system design, and the end user has no tools whatsoever to assess whether the condition holds. As a result, it is typically taken at face value. This is unfortunate, as it may lead to deployments where privacy fails - and, to make matters worse, no one may even be aware of it.

In this talk, I will present ongoing work (in collaboration with Claudia Diaz and Harry Halpin) pursued in the context of Nym (https://nymtech.net/), an incentivised mix-network system. The design is based on new techniques in the context of mix-nets and suitably adapts concepts first developed in the context of blockchain protocols, offering an incentive-driven privacy-preserving communication infrastructure that aims to provide privacy via a novel "cryptoeconomic" mechanism.


RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

23 March 14:00 - Dark web marketplaces and COVID-19: Flexible and resilient / Andrea Baronchelli, City University of London and The Alan Turing Institute

Webinar

This talk focuses on two aspects of dark web marketplaces (DWMs). (1) It investigates how DWMs reacted to COVID-19. The analysis of millions of listings from 102 DWMs identified 788 listings directly related to COVID-19 products and monitored the temporal evolution of product categories including Personal Protective Equipment (PPE), medicines (e.g., hydroxychloroquine), and medical frauds, as well as 33 vaccine listings. In general, the supply of COVID-19-related goods on DWMs reacts either to shortages in the economy or to public attention, and in particular to misinformation. (2) It clarifies how the DWM ecosystem can be resilient despite the intrinsic weaknesses of individual markets. It analyses 24 separate episodes of unexpected marketplace closure by inspecting 133 million Bitcoin transactions among 38 million users, focusing on “migrating users” who move their trading activity to a different marketplace after a closure. It shows that most migrating users continue their trading activity on a single coexisting marketplace, typically the one with the highest trading volume. User migration is swift and trading volumes of migrating users recover quickly. Thus, although individual marketplaces might appear fragile, coordinated user migration guarantees overall systemic resilience.

REFERENCES
* Dark Web Marketplaces and COVID-19: Before the vaccine. EPJ Data Science 10 (1), 6 (2021)
* Dark Web Marketplaces and COVID-19: The vaccines. Preprint arXiv:2102.05470 (2021)
* Collective dynamics of dark web marketplaces. Scientific Reports 10 (1), 1-8 (2020).


RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

16 March 14:00 - Understanding Modern Phishing / Brad Wardman and Adam Oest, PayPal

Webinar

Despite ubiquitous anti-phishing technologies in modern computer systems and extensive mitigation efforts by the security ecosystem, large-scale phishing remains a key threat to Internet users. In this talk, we will explore the current state of phishing attacks to illustrate how criminals operate and better understand the many weaknesses exploited by their attacks. Through the lens of recent anti-phishing research---including a study of how the current pandemic has transformed phishing---we will also discuss future opportunities for better protecting users from phishing and related scams.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

09 March 14:00 - Deep Learning Assisted Side-Channel Attacks / Elena Dubrova, KTH Royal Institute of Technology

Webinar

Technologies such as deep learning are expected to be the cornerstones of tomorrow's cyber defenses. However, the adversaries are working just as hard as everyone else to turn these technologies to their advantage. In this talk, we will show how deep learning assisted side-channel attacks enable the attacker to break software implementations of Advanced Encryption Standard in USIM cards and Nordic Semiconductor's nRF52832 system-on-chip. We will also present our latest results on power analysis of an ARM Cortex-M4 first-order masked implementation of the Saber key encapsulation mechanism (third round finalist of the NIST post-quantum cryptography standardization process).
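For orientation, a profiled deep-learning side-channel attack typically trains a classifier to map a power trace to an intermediate value (for example an AES S-box output) using labelled traces from a device the attacker controls, and then applies it to traces from the victim. The sketch below is a generic, simplified illustration of that profiling phase; the network, trace format and training loop are assumptions for illustration, not the setup used in the talk.

```python
# Generic sketch of the profiling phase of a deep-learning side-channel attack on one
# AES key byte. Assumptions: labelled power traces (trace -> S-box output of byte 0)
# are available from a profiling device.
import numpy as np
import torch
import torch.nn as nn

class Profiler(nn.Module):
    def __init__(self, trace_len: int, n_classes: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(trace_len, 128), nn.ReLU(),
            nn.Linear(128, n_classes),          # one class per intermediate value
        )
    def forward(self, x):
        return self.net(x)

def train_profiler(traces: np.ndarray, labels: np.ndarray, epochs: int = 10):
    """Fit the classifier on profiling traces (full-batch gradient descent for brevity)."""
    model = Profiler(traces.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.tensor(traces, dtype=torch.float32)
    y = torch.tensor(labels, dtype=torch.long)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return model

# Attack phase (not shown): for each key-byte guess, map attack traces and known
# plaintexts to predicted intermediate-value classes and accumulate log-likelihoods;
# the correct guess is the one with the highest total score.
```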

02 March 14:00 - Crowdsourcing Security Research: A case study of AdObserver / Laura Edelson and Damon McCoy, New York University

Webinar

In the spring of 2020, we published AdObserver, a browser extension for anonymously crowdsourcing data about ads on social media. Over the past year, 17,000 users have downloaded AdObserver for Chrome or Firefox. During the 2020 US election period, we were able to collect observations of tens of thousands of ads every day. We publicly released the political Facebook ads we were able to identify, including their targeting information. We present a case study of AdObserver and review initial results. Data collected via our browser extension has been instrumental to our discovery of security vulnerabilities and errors in both Facebook and Google’s systems for ensuring ads comply with platform policies and the law.

We also discuss important considerations for researchers considering collecting data in this way and specific best practices for projects attempting to use browser extensions to crowdsource data while protecting the privacy of users who donate data.

RECORDING: Please note, this event may be recorded and may be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

23 February 14:00 - Dmitry goes to Hollywood: Criminal Excellence in (Cyber) La La Land / Luca Allodi, Eindhoven University of Technology

Webinar

Cyber-criminals and attackers at large have access to a wide range of technologies and techniques of varying sophistication to deliver attacks: from script-kiddie types of attacks employing automated and well-known exploits, to mature malware delivery platforms capable of crypting or packing malware at delivery time, and multi-stage, highly tailored social engineering attacks employing a large portfolio of targeting and psychological techniques. Yet, most cyber-criminal ventures are relatively uninteresting: dozens of underground marketplaces exist, but which of those support technological innovation rather than mainly scam-for-scammers activities is currently hard to know. Similarly, yet another "Your mailbox is full, please click here to reset your password" phishing attack hardly makes the news, while we lack the tools to characterize much more sophisticated and innovative social engineering attacks targeting, for example, specific individuals across multiple attack stages.

In this talk we discuss what features characterize "cyber-criminal excellence", and distinguish it from "ordinary" Internet crime. Reflecting current attack trends, we focus on criminal markets and social engineering techniques: within both domains, we propose and discuss models and criteria to characterize relevant and highly innovative criminal ventures and sophisticated social engineering attacks which ought to be studied and understood, and showcase their application through real-world case studies.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

16 February 14:00 - A Liar and a Copycat: Nonverbal Coordination Increases with Lie Difficulty / Sophie van der Zee, Erasmus University Rotterdam

Webinar

Nonverbal coordination is the tendency to imitate the behaviors of others. Coordination can take place both on a conscious and a more unconscious or automatic level. How much people coordinate with their interaction partner depends on several factors, including liking and common goals. There is some evidence that the occurrence of coordination is also affected by cognitive load. So far, this has only been demonstrated in isolated body part movement. A forensically relevant setting that is strongly associated with increased cognitive load is deception. Lying, especially when fabricating accounts, can be more cognitively demanding than truth telling. In two studies, we demonstrate that interactional nonverbal coordination increases under the cognitive load of lying. Nonverbal coordination is an especially interesting cue to deceit because its occurrence relies on automatic processes and is therefore more difficult to deliberately control. Our findings complement current deception research into the liar’s nonverbal behavior by explicitly considering the interaction with the interviewer. Our findings extend the current literature on increased reliance on automated processes by demonstrating that nonverbal coordination can be such an automated process that is affected by increased cognitive load. The use of motion capture technology provides a novel, objective and efficient means of measurement.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

09 February 14:00 - Cybersecurity Risk to Hospitals from Building Services / Sheryn Gillin, University of Cambridge

Webinar

Human error and the vulnerability of clinical devices are perceived as the foremost cybersecurity risks in the critical infrastructure sector of healthcare; this limited view, however, overlooks the possible disruption due to a malicious actor accessing the network or systems that maintain the environmental conditions within a healthcare facility. Operating theatres, laboratories, pharmacies, sterile stores and imaging equipment have stringent environmental requirements. To achieve these conditions, chilled water and ventilation must function within set tolerances; any divergence could significantly impact a hospital due to cancelled surgeries or diagnostic procedures, an MRI quench or the need to dispose of sterile products. Focussing on the systems required to maintain the environment within these specialist rooms, the vulnerabilities, threats, risks and impacts were investigated using four case study hospitals in Canada and the UK.

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

26 January 16:00 - Towards Provable Physical Safety Against False Actuation Attacks in CPS / Alvaro Cardenas, University of California, Santa Cruz

Webinar

The vulnerability of cyber-physical systems (CPS) is a growing area of concern, and in the past decade, researchers have proposed a variety of security defenses for these systems. Most of these proposals are heuristic in nature, and while they increase the protection of their target, the security guarantees they provide are unclear. In this talk we discuss two different approaches for modeling the security guarantees of a cyber-physical system against arbitrary false command attacks. The first part of the talk discusses the idea of providing physical protections by saturating actuators, and the second part of the talk discusses how to use barrier certificates to prove safety of a real-world system. Our work is an effort to move forward CPS security research towards precise definitions, precise claims, and provable security.
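For context, a (textbook) barrier certificate gives exactly this kind of provable guarantee: if a function B separates the initial states from the unsafe states and cannot increase along system trajectories, then no trajectory starting in the initial set can ever reach the unsafe set. The conditions below are the standard formulation and may differ from the exact conditions used in the talk.

```latex
% Standard barrier-certificate conditions for \dot{x} = f(x), with initial set X_0,
% unsafe set X_u, and state domain X (textbook formulation, for reference only).
\begin{align*}
  B(x) &\le 0 && \forall x \in X_0,\\
  B(x) &> 0   && \forall x \in X_u,\\
  \frac{\partial B}{\partial x}(x)\, f(x) &\le 0 && \forall x \in X.
\end{align*}
% If such a B exists, every trajectory starting in X_0 satisfies B(x(t)) \le 0 for all t,
% so it can never enter X_u: the system is provably safe.
```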

RECORDING: Please note, this event may be recorded and may be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.