Department of Computer Science and Technology

Security Group

2018 seminars



04 December 16:00 Function-Based Access Control / Yvo G. Desmedt, Department of Computer Science, University of Texas at Dallas

FW11, Computer Laboratory, William Gates Building

Inspired by Functional Encryption, we introduce Function-Based Access Control (FBAC). From an abstract viewpoint, we suggest storing access authorizations as a three-dimensional tensor, an Access Control Tensor (ACT), rather than as the two-dimensional Access Control Matrix (ACM).

In FBAC, applications are not given blindfolded execution rights and can only invoke commands that have been authorized for function-defined data segments. So, one might be authorized to use a certain command on one object while being forbidden to use the same command on another object. Such behaviour cannot be efficiently modelled using the classical access control matrix, nor achieved efficiently using cryptographic mechanisms.
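As a rough illustration of the idea (a sketch, not code from the talk; all names are hypothetical), the snippet below keys authorizations on (subject, object, function) triples, so the same function can be allowed on one object and denied on another:

```python
# Minimal sketch of an Access Control Tensor (ACT): authorizations are keyed by
# (subject, object, function) triples rather than the (subject, object) pairs of a
# classical Access Control Matrix. Names below are illustrative only.

class AccessControlTensor:
    def __init__(self):
        self._grants = set()  # set of (subject, obj, function) triples

    def grant(self, subject, obj, function):
        self._grants.add((subject, obj, function))

    def is_authorized(self, subject, obj, function):
        return (subject, obj, function) in self._grants


act = AccessControlTensor()
act.grant("alice", "medical_record_42", "compute_average")   # allowed
# No grant for the same function on a different object.

assert act.is_authorized("alice", "medical_record_42", "compute_average")
assert not act.is_authorized("alice", "medical_record_99", "compute_average")
```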

--

SHORT BIO

Yvo Desmedt is the Jonsson Distinguished Professor at the University of Texas at Dallas, an Honorary Professor at University College London, a Fellow of the International Association for Cryptologic Research (IACR) and a Member of the Belgian Royal Academy of Science. He received his Ph.D. (1984, Summa cum Laude) from the University of Leuven, Belgium.

He held positions at the Université de Montréal, the University of Wisconsin-Milwaukee (founding director of the Center for Cryptography, Computer and Network Security), and Florida State University (Director of the Laboratory of Security and Assurance in Information Technology, one of the first 14 NSA Centers of Excellence). He was BT Chair and Chair of Information Communication Technology at University College London, and has held numerous visiting appointments. He is the Editor-in-Chief of IET Information Security and Chair of the Steering Committee of CANS. He was Program Chair of, among others, Crypto 1994, the ACM Workshop on Scientific Aspects of Cyber Terrorism 2002, and ISC 2013.

He has authored over 200 refereed papers, primarily on cryptography, computer security, and network security. He has made important predictions, such as his 1983 technical description of how cyber attacks could be used against control systems (realized by Stuxnet), and his 1996 prediction that hackers would target Certifying Authorities (DigiNotar was targeted in 2011).


27 November 14:00 The ideal versus the real: A brief history of secure isolation in virtual machines and containers / Allison Randal, Computer Laboratory, University of Cambridge

LT2, Computer Laboratory, William Gates Building

The common perception in both the academic literature and industry today is that virtual machines offer better security while containers offer better performance; however, a detailed review of the history of these technologies and the current threats they face reveals a very different story.

This talk is an early preview of a survey paper covering key developments in the history of virtual machines and containers from the 1950s to today, with an emphasis on shattering myths and seeking a viable path forward for secure isolation in large-scale multitenant deployments such as cloud and containers.


13 November 14:00 Displacing big data: How cybercriminals cheat the system / Alice Hutchings, Computer Laboratory, University of Cambridge

LT2, Computer Laboratory, William Gates Building

Many technical approaches for detecting and preventing cybercrimes use big data and machine learning, drawing upon knowledge about the behaviour of legitimate customers and indicators of cybercrime. These include fraud detection systems, behavioural analysis, spam detection, intrusion detection systems, anti-virus software, and denial of service attack protection. However, criminals have adapted their methods in response to big data systems.

I will present case studies for a number of different cybercrime types to highlight the methods used for cheating such systems. I will argue that big data solutions are not a silver bullet approach to disrupting cybercrime, but rather represent a Red Queen’s race, requiring constant running to stay in one spot.


06 November 14:00 You think you're not a target? A tale of three developers / Chris Lamb, Debian Project Leader

FW26, Computer Laboratory, William Gates Building

If you develop or distribute software of any kind, you are vulnerable to whole categories of attacks upon yourself or your loved ones. This includes blackmail, extortion or "just" simple malware injection. By targeting software developers such as yourself, malicious actors, including nefarious governments, can infect and attack thousands — if not millions — of end users. How can we avert this?

The idea behind "reproducible" builds is to allow verification that no flaws have been introduced during build processes; this protects against the installation of backdoor-introducing malware on developers' machines, ensuring that attempts at extortion and other forms of subterfuge are quickly uncovered and thus ultimately futile.
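As a minimal sketch of the verification step (illustrative file names, not Debian's actual tooling): two independent builds of the same source should yield bit-for-bit identical artifacts, which can be checked by comparing cryptographic hashes.

```python
# Sketch of the core check behind reproducible builds: independently built artifacts
# of the same source should be identical, so their hashes must match. Paths are
# hypothetical; real projects use tools such as diffoscope to diagnose differences.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_reproducible(artifact_a, artifact_b):
    return sha256_of(artifact_a) == sha256_of(artifact_b)

# Example usage (hypothetical artifact names):
# print(builds_reproducible("build1/package.deb", "build2/package.deb"))
```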

Through a story of three different developers, this talk will engage you on this growing threat and how it affects everyone involved in the production lifecycle of software development, as well as how reproducible builds can help prevent it.


30 October 14:00 The Guardian Council: Using many-core architectures to support programmable hardware security / Sam Ainsworth, Computer Laboratory, University of Cambridge

LT2, Computer Laboratory, William Gates Building

Computer security is becoming more challenging in the face of untrusted programs and system users, and safeguards against the attacks in current use, such as buffer overflows, rowhammer, side channels and malware, are limited. Software protection schemes are often too expensive, and hardware schemes too constrained or out-of-date to be practical. We propose that the necessary solution is a hybrid: a programmable security architecture implemented on chip, using dedicated hardware channels for analysis information and software units for entire-program dynamic analysis.

The key insight that makes this practical in a modern setting is that silicon scaling trends allow a very large amount of computation at very low power and area overhead, provided the computation can be parallelized. We therefore use a set of highly parallel, small Guardian Processing Elements as part of an architecture designed to support powerful programmable security at very low cost.

We use this system to design and evaluate implementations of a wide variety of hardware and software protection mechanisms with low power, performance and area overheads.
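As a very loose software analogue of the idea (an assumption-laden sketch, not the paper's actual interface), the snippet below streams execution events over a channel to a small checker running a simple dynamic analysis; in the proposed architecture, many such checkers would run in parallel on the Guardian Processing Elements.

```python
# Rough software analogue: a main core streams execution events over a channel and a
# small guardian thread runs a simple dynamic analysis over the stream. The event
# format and the bounds check are illustrative only.
from queue import Queue
from threading import Thread

def bounds_checker(events, alerts):
    # Flags loads/stores outside a (hypothetical) 4 KiB buffer at 0x1000.
    for kind, addr in iter(events.get, None):
        if kind in ("load", "store") and not (0x1000 <= addr < 0x2000):
            alerts.append(f"out-of-bounds {kind} at {hex(addr)}")

events, alerts = Queue(), []
guardian = Thread(target=bounds_checker, args=(events, alerts))
guardian.start()

for ev in [("load", 0x1004), ("store", 0x2abc), ("load", 0x1ff0)]:
    events.put(ev)
events.put(None)          # end of stream
guardian.join()
print(alerts)             # ['out-of-bounds store at 0x2abc']
```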


16 July 15:00 Combating a hydra: Islamic State's digital jihad as a threat to international security / Miron Lakomy, Department of International Relations, University of Silesia

LT2, Computer Laboratory, William Gates Building

This presentation has three major goals. To begin with, it will discuss the history of digital jihad since the mid-1990s (including the cases of Chechen Islamist militias, Hezbollah and Hamas). Secondly, it will provide an overview of the current forms and means of Islamist terrorist propaganda on the Internet, with special emphasis on video productions (reports, beheadings, combat footage, music videos), music (nasheed chants), online magazines, video games, and other types of visual content (infographics, images, banners, CGI). Finally, it will discuss the most important features of the Islamic State's cyber jihad, its implications for international security, and means of combating it.


22 May 14:00 Detecting Spies in Sensor-Rich Environments using Cyber-Physical Correlation / Brent Lagesse, University of Washington Bothell (visiting Cambridge as a Fulbright Scholar)

LT2, Computer Laboratory, William Gates Building

The emerging ubiquity of devices with monitoring capabilities has resulted in growing privacy concerns. This work addresses the challenge of automatically identifying devices that are streaming privacy-intruding information about a user. The work includes a framework for inducing a signal in the physical world and then detecting its digital footprint when devices are monitoring the user. The approach only requires the user to have a device capable of entering network monitor mode. The techniques described work regardless of whether the camera uses encryption or is even on the same network as the user's device. The effectiveness of this approach has been demonstrated by analysing over 15 hours of network traffic. The technique detected over 90% of the hidden cameras across a variety of physical environments, with a false-positive rate of less than 6%, within 30 seconds of the camera beginning to stream recordings.

* http://www.brentlagesse.net
* https://faculty.washington.edu/lagesse/publications/HiddenSensorDetection.pdf
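A toy sketch of the correlation idea described above (the stimulus pattern, traffic figures and threshold are all invented for illustration): induce a known on/off change in the scene and flag any device whose per-second traffic volume tracks it.

```python
# Toy cyber-physical correlation: modulate something visible (e.g. toggle a light) in a
# known on/off pattern and check whether any device's per-second traffic volume follows
# that pattern. All numbers below are made up.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

stimulus = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1]            # induced light pattern, per second
traffic = {                                           # observed bytes/s per MAC address
    "aa:bb:cc:01": [90, 85, 30, 28, 95, 88, 25, 31, 92, 87],   # tracks the stimulus
    "aa:bb:cc:02": [60, 62, 61, 59, 63, 60, 58, 62, 61, 60],   # does not
}
for mac, series in traffic.items():
    r = pearson(stimulus, series)
    if r > 0.8:
        print(f"{mac} is likely streaming the monitored scene (r={r:.2f})")
```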


15 May 14:00 Ethical Issues in Network Measurement / Shehar Bano, Dept. of Computer Science, University College London (UCL)

LT2, Computer Laboratory, William Gates Building

Sound science and evidence-based decision making hinge on empiricism - insights derived from rigorous measurements. As our lives increasingly depend on digital devices and online communication, a diverse set of domains ranging from human behaviour studies, through malware analysis, to characterisation of information controls make use of data collected via network measurements. Such measurements introduce new challenges for ethical research due to their scale, speed of information dissemination, indirect interaction and complex dependencies between entities, opacity, and so forth. The research community has made efforts such as the Menlo Report to re-contextualise and extend existing ethical guidelines (e.g., the 1979 Belmont Report widely used in the biomedical and behavioural sciences). However, apt interpretation of these guidelines against the backdrop of an evolving technical, legal and social landscape is far from simple.

In this talk, I will walk the audience through a selection of network measurement case studies, with the goal of highlighting ethical challenges, the choices made, and possible alternatives. This will mostly be an interactive discussion, and will include (at least some of) the following topics:

* Is approval from an Institutional Review Board (IRB) / Research Ethics Board (REB) enough?
* Do users with different technical abilities share similar interpretations of risks?
* Do the same ethical guidelines apply to the measurement process and to the results generated?
* Do ethical guidelines apply to pre-existing data?
* What about pre-existing data that was obtained by illicit means?
* What about co-opted measurements (getting third-parties to indirectly generate measurements towards the targets)?
* When to obtain consent (when is post facto consent ok)?
* Is what’s deemed legal also ethical?
* Does legality of method imply safety for implicated subjects?
* Should academia use higher standards for performing measurement research?

===
Bio
===

(Shehar) Bano is a postdoctoral researcher in the Information Security Research Group at University College London. Her research interests centre on networked systems, particularly in the context of security and measurement. Currently, she is working on the design, scalability and applications of blockchains with George Danezis and Sarah Meiklejohn as part of the EU DECODE and EPSRC Glass Houses projects. She is a member of IC3 (the Initiative for CryptoCurrencies and Contracts) and the UCL Centre for Blockchain Technologies (CBT).

Network measurements form a key part of Bano's research work, ranging from low-layer network phenomena, such as the characterization of IP liveness via active Internet scans, to understanding online ecosystems in the context of security and information control. She received her Ph.D. ("Characterization of Internet Censorship from Multiple Perspectives") from the University of Cambridge in 2017 under the supervision of Prof. Jon Crowcroft (co-supervised by Dr. Steven Murdoch, Prof. Vern Paxson, and Prof. Ross Anderson), where she was an Honorary Cambridge Trust Scholar and was awarded the Mary Bradburn Scholarship by the British Federation of Women Graduates for her research work. Previously she worked on intrusion detection systems (Bro) and wrote open-source software for botnet detection (BotFlex). She interned at ICSI, UC Berkeley in 2012 and 2013. She received her Master's degree in Computer and Communication Security from the National University of Science and Technology, Pakistan, in 2013, for which she was awarded the President's Gold Medal. Her work has been published at the Network and Distributed System Security Symposium, the ACM Internet Measurement Conference, the Symposium on Privacy Enhancing Technologies, SIGCOMM CCR, WWW, and other well-respected venues.


08 May 14:00 Data science approaches to understanding key actors on online hacking forums / Sergio Pastrana/Andrew Caines, Computer Laboratory, University of Cambridge

LT2, Computer Laboratory, William Gates Building

Underground forums contain many thousands of active users, but the vast majority will be involved, at most, in minor levels of deviance. The number who become engaged in serious criminal activity is small. That being said, underground forums have played a significant role in several recent high-profile cybercrime activities. We have compiled a massive dataset, dubbed CrimeBB, by crawling and scraping an assortment of online forums. The dataset presents a unique opportunity to understand these communities at scale, and allows for longitudinal social data analysis. Manual analysis is infeasible, and the complexity of these forums, and the unique lexicon used, makes automatic analysis challenging. In this talk we will describe the data collection and present preliminary results obtained in the scope of an interdisciplinary project in which we apply various data science methods to analyse the data. Concretely, we apply social network analysis to analyse actors' social interests, natural language processing to classify the types of information posted, and clustering to group the actors based on forum activity.
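As an illustrative sketch of the clustering strand only (the features and figures below are invented; the project's actual CrimeBB pipeline is far richer), actors can be grouped on simple activity features:

```python
# Illustrative clustering of forum actors on simple activity features.
# Features and numbers are invented for demonstration.
from sklearn.cluster import KMeans

# rows: [posts_written, threads_started, mean_post_length_chars]
actors = {
    "user_a": [1200, 150, 340],
    "user_b": [15, 1, 80],
    "user_c": [980, 90, 410],
    "user_d": [22, 3, 95],
}
X = list(actors.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for name, label in zip(actors, labels):
    print(name, "-> cluster", label)
```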


27 March 14:00 Wireless Physical-layer Security: Fundamentals and Jamming with Coding for Secrecy / João P. Vilela, Informatics Engineering, University of Coimbra

LT2, Computer Laboratory, William Gates Building

The resurgence of physical-layer security, after early contributions from the seventies stemming from information-theoretic security concepts, is tied to recent advances on wireless networks. While some works have looked at the effect of intrinsic wireless phenomena, such as fading, on the secrecy level of these networks, other works consider active approaches whereby cooperative users (e.g. relays or jammers) are used to improve security. In particular, otherwise silent devices can be selected as friendly jammers to improve secrecy, by causing interference to adversaries. In this talk, after introducing the fundamentals of wireless physical-layer security, we present techniques for selecting jammers with security concerns and show how to combine those jamming techniques with coding methodologies to amplify the effect of jamming for secrecy, while reducing the associated energy cost.
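As a back-of-the-envelope illustration of the underlying principle (a textbook Gaussian-wiretap formula with made-up numbers, not results from the talk): friendly jamming that degrades mainly the eavesdropper's SNR increases the secrecy capacity.

```python
# For a Gaussian wiretap channel, the secrecy capacity is
#   C_s = max(0, log2(1 + SNR_Bob) - log2(1 + SNR_Eve)).
# A jammer that interferes mainly with the eavesdropper lowers SNR_Eve and so raises
# C_s. Numbers are illustrative only.
from math import log2

def secrecy_capacity(snr_bob, snr_eve):
    return max(0.0, log2(1 + snr_bob) - log2(1 + snr_eve))

snr_bob = 20.0                      # legitimate receiver, linear scale
snr_eve_no_jam = 15.0               # eavesdropper without jamming
snr_eve_jammed = 15.0 / (1 + 10.0)  # jamming adds interference power at the eavesdropper

print(secrecy_capacity(snr_bob, snr_eve_no_jam))   # ~0.39 bits/channel use
print(secrecy_capacity(snr_bob, snr_eve_jammed))   # ~3.15 bits/channel use
```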

Bio:
João P. Vilela is an assistant professor at the Department of Informatics Engineering of the University of Coimbra and a senior researcher at the Laboratory of Communications and Telematics of the Centre for Informatics and Systems of the University of Coimbra. He received his Ph.D. in Computer Science in 2011 from the University of Porto, Portugal, during which period he visited the Coding, Communications and Information Theory group at Georgia Tech, working on physical-layer security, and the Network Coding and Reliable Communications group at MIT, working on security for network coding. In recent years, Dr. Vilela has been coordinator and team member of several national, bilateral, and European-funded projects in security and privacy of computer and communication systems, with a focus on wireless networks, mobile devices and cloud computing. Other research interests include anticipatory networks and intelligent transportation systems.


20 February 14:00 Large Scale Ubiquitous Data Sources for Crime Prediction / Cristina Kadar, ETH Zurich

LT2, Computer Laboratory, William Gates Building

In this talk, I will present two approaches to geographical crime profiling that leverage machine learning techniques and large scale ubiquitous data sources. I will briefly touch on their motivation in criminology and urban studies, as well as on their challenges and limitations.

The first work mines large-scale human mobility data to craft an extensive set of features for yearly crime count prediction in New York City. Traditional crime models based on census data are limited, as they fail to capture the complexity and dynamics of human activity. With the rise of ubiquitous computing, there is the opportunity to improve such models with data that make for better proxies of human presence in cities. Our study shows that spatial and spatio-temporal features derived from Foursquare venues and check-ins, subway rides, and taxi rides improve the baseline models relying only on census data. The proposed ensemble machine learning models achieve absolute R^2 metrics of up to 65% (on a geographical out-of-sample test set) and up to 89% (on a temporal out-of-sample test set). This shows that, next to the residential population of an area, the ambient population there is strongly predictive of the area's crime levels. We deep-dive into the main crime categories, and find that the predictive gain of the human dynamics features varies across crime types: such features bring the biggest boost in the case of grand larcenies, whereas assaults are already well predicted by the census features. Furthermore, we identify and discuss the top predictive features for the main crime categories. These results offer valuable insights for those responsible for urban policy.
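A schematic of this modelling setup (synthetic data and default model settings, purely for illustration; the study uses real NYC census, Foursquare, subway and taxi data): census-only features versus census plus mobility features, compared by out-of-sample R^2.

```python
# Census-only vs census + mobility features, compared by held-out R^2 on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
census = rng.normal(size=(n, 3))       # stand-ins for population, income, density
mobility = rng.normal(size=(n, 3))     # stand-ins for check-ins, subway, taxi rides
crime = 2 * census[:, 0] + 3 * mobility[:, 1] + rng.normal(scale=0.5, size=n)

for name, X in [("census only", census),
                ("census + mobility", np.hstack([census, mobility]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, crime, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print(name, "R^2 =", round(r2_score(y_te, model.predict(X_te)), 2))
```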

The second work investigates a forecasting approach for daily burglary risk within a region of Switzerland characterized by significantly lower levels of urbanization than the areas analyzed in prevailing crime prediction research. The lower levels of urbanization, in combination with high spatial and temporal granularity, pose a significant challenge to building the accurate prediction models needed to derive feasible and effective preventive actions, e.g., in the form of police patrols. We employ machine learning methods, which allow for the integration of diverse fine-grained data on the demographic, geographic, economic, temporal, and meteorological characteristics of the environment, next to past burglary events. We propose an approach that addresses the sparsity of the data and significantly outperforms the baseline implementation of a prospective hotspot model, which only makes use of historical crime data and is an industry standard. For instance, by setting the coverage of the predicted areas to 5% of the total studied area, the model is able to predict the committed burglaries on a specific day within a four-hectare rectangular area with an average hit ratio of 57%, compared to the 36% hit ratio of the baseline. This research has direct implications for decision makers in charge of resource allocation for crime prevention.
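As a sketch of the hit-ratio metric at a fixed coverage (synthetic numbers on a hypothetical grid): rank cells by predicted risk, flag the top 5% of the area, and report the fraction of that day's burglaries that fall inside the flagged cells.

```python
# Hit ratio at fixed coverage: fraction of actual burglaries falling in the top-ranked
# cells that together make up the chosen share of the area. Data is synthetic.
def hit_ratio(predicted_risk, actual_counts, coverage=0.05):
    n_flagged = max(1, int(len(predicted_risk) * coverage))
    flagged = sorted(range(len(predicted_risk)),
                     key=lambda i: predicted_risk[i], reverse=True)[:n_flagged]
    hits = sum(actual_counts[i] for i in flagged)
    total = sum(actual_counts)
    return hits / total if total else 0.0

risk = [0.9, 0.1, 0.4, 0.05, 0.8, 0.2, 0.3, 0.15, 0.6, 0.25,
        0.02, 0.07, 0.12, 0.5, 0.33, 0.28, 0.01, 0.09, 0.11, 0.04]
burglaries = [2, 0, 0, 0, 1, 0, 0, 0, 1, 0,
              0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
print(hit_ratio(risk, burglaries))   # share of burglaries in the top-5% riskiest cells
```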

Bio:
Cristina is a PhD candidate at the Department of Management, Technology, and Economics (D-MTEC) of the Swiss Federal Institute of Technology in Zurich (ETH Zurich). She holds an M.Sc. with Honors in Software Engineering from the Technical University of Munich and a B.Sc. in Computer Science from Leibniz University of Hanover. Her research interests revolve around information systems, computational social science, and applied machine learning, with a focus on crime and fear of crime.


06 February 15:00 Psychological predictors of risky online behaviour: The cases of online piracy and privacy / Piers Fleming, School of Psychology, University of East Anglia

LT2, Computer Laboratory, William Gates Building

Despite the real-world implications of online behaviour, our online actions remain highly malleable with respect to our stated preferences or contextual factors. We often act inconsistently with our own past behaviour or attitudes depending on the situation. This talk examines strong and weak predictors of piracy and privacy behaviours based on survey and experimental studies. Risk perceptions, values and personality differences are considered.


30 January 14:00 Anonymity in Cryptocurrencies / Sarah Meiklejohn, Information Security Group, University College London (UCL)

LT2, Computer Laboratory, William Gates Building

A long line of recent research has demonstrated that existing cryptocurrencies often do not achieve the level of anonymity that users might expect, while another line of research has worked to increase the level of anonymity by adding new features to existing cryptocurrencies or creating entirely new ones. This talk will explore both of these lines of research, demonstrating both de-anonymization attacks and techniques for anonymity that achieve provably secure guarantees.
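As a sketch of one widely used de-anonymization heuristic (not necessarily the techniques covered in the talk; the transactions below are invented): addresses whose funds are spent together as inputs to a single transaction are assumed to share an owner, and can be merged with union-find.

```python
# Multi-input clustering heuristic: inputs spent together in one transaction are
# assumed to be controlled by the same entity. Transactions are made up.
parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

transactions = [
    {"inputs": ["addr1", "addr2"]},
    {"inputs": ["addr2", "addr3"]},
    {"inputs": ["addr4"]},
]
for tx in transactions:
    addrs = tx["inputs"]
    for a in addrs:
        find(a)                          # register every input address
    for other in addrs[1:]:
        union(addrs[0], other)

clusters = {}
for addr in parent:
    clusters.setdefault(find(addr), []).append(addr)
print(list(clusters.values()))           # [['addr1', 'addr2', 'addr3'], ['addr4']]
```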

Bio:

Sarah Meiklejohn is a Reader in Cryptography and Security at University College London. She has broad research interests in computer security and cryptography, and has worked on topics such as anonymity and criminal abuses in cryptocurrencies, privacy-enhancing technologies, and bringing transparency to shared systems.


23 January 14:00 The impact of cybercrime on businesses: A new conceptual framework and its application to Belgium / Letizia Paoli, Faculty of Law, University of Leuven

LT2, Computer Laboratory, William Gates Building

Despite growing indications of and fears about the impact of cybercrime, only a few academic studies have so far been published on the topic to complement those produced by consultancy firms, cybersecurity companies and private institutes. A review of these studies shows that there is no consensus on how to define and measure cybercrime, or how to assess its impact.

Against this background, this article pursues two aims: 1) to develop a well-thought-out conceptual framework to define and operationalize cybercrime affecting businesses, as well as its impact, harms, and costs; and 2) to test this conceptual framework with a survey of businesses based in Belgium, which was administered in summer 2016 and elicited 310 valid responses.

Consisting of five types, our conceptualization of cybercrime is, unlike others, technology-neutral and fully compatible with the legislation. Drawing on Greenfield and Paoli (2013), we understand impact as the overall harm of cybercrime, that is, the "sum" of the harms to material support, or costs, and the harms to three other interest dimensions: functional (i.e., operational) integrity, reputation and privacy. Whereas we ask respondents to provide a monetary estimate of the costs, we invite them to rate the severity of the other harms on an ordinal scale.

This "double track" might give a fuller assessment of cybercrime's impact. Whereas most affected businesses do not report major costs or harms, 15% to 20% of them rate the harms to their internal operational activities as serious or worse, with cyber extortion regarded as the most harmful.