Stephen Kell

About Me

Photo by Ohad Barzilay. Thanks, Ohad.

I'm a postdoctoral Research Associate at the Computer Laboratory at the University of Cambridge. I'm working within the REMS project. Within the Laboratory, I participate in both the Programming, Logic and Semantics group and the Systems Research Group (particularly its Networks and Operating Systems subgroup).

Contact information is at the bottom of this page.

I am a practical computer scientist with wide interests. My goal is to make it easier and cheaper to develop useful, high-quality software systems. So far, my work has mostly focused on programming languages and the systems that support them—including language runtimes and operating systems.

Most of my research concerns programming, but I identify as a “systems” researcher. For me, “systems” is a mindset rather than a research topic. It means I am primarily interested in the practical consequences of an idea; its abstract properties are of interest only subserviently. (If you can come up with a better definition of the “systems mindset”, let me know.) I generally hesitate to identify as a “programming languages” researcher, not because I dislike languages but because they're only one piece of the puzzle. I'd identify as a “programming systems” researcher if that were a thing.

In recent years since my PhD, I've spent various spells working for Oracle Labs, the University of Lugano, and the University of Oxford. See my biography section for more.

You can read more below about what I'm working on. A recurring theme is that I prefer pushing innovations into the software stack at relatively low levels, to the extent that this is sensible (which I believe can be surprising). Another is the pursuit of infrastructure that is well-integrated, observable, simple, flexible, and designed for human beings—particularly if we believe (as I do) that computers are for augmenting the human intellect, not just for high-performance data crunching.

Micro-blog and calendar

Recent blog entries

(all entries)

Very brief biography

I did my Bachelor's degree in computer science in Cambridge, graduating in 2005. I then stayed on for a year as a Research Assistant, before starting my PhD in 2006.

My PhD work was based in the Networks and Operating Systems group under the supervision of Dr David Greaves, and centred on the problem of building software by composition of ill-matched components, for which I developed the Cake language.

During summer 2007 I took an internship at Fraser Research, doing networking research.

From January 2011 until March 2012 I was a research assistant in the Department of Computer Science at the University of Oxford. There I worked as a James Martin Fellow, within the research programme of the Oxford Martin School's Institute for the Future of Computing. My work mostly focused on constructing a continuum between state-space methods of program analysis (notably symbolic execution) and syntactic methods (such as type checking). (This work was rudely interrupted by fate, but will be revived eventually.)

From May 2012 to May 2013, I was a postdoctoral researcher at USI's Faculty of Informatics in Lugano, Switzerland, within the FAN project.

From May until October 2013 I was temporarily a Research Assistant at Oracle Labs, on the Alphabet Soup project.

In my industrial life, I enjoyed some spells working for Opal Telecom and ARM around my Bachelor's studies. More recently, I have been consulting for Ellexus, and conducted some industrial research work (in reality rather development-heavy) for Oracle Labs.

You can find a version of my CV here, though be aware that it may be a little rough or incomplete at times. If you're recruiting for jobs in the financial sector, even technical ones, please don't bother reading my CV or contacting me—I promise I am not interested.

Research

Current work

Here's what I'm working on as of April 2016. In most cases these are my own individual projects; a few are with collaborators, as I describe.

Making the Unix process environment more descriptive and reflective

Unix-like processes are with us to stay, but are impoverished in certain ways. Notably, they lack meta-level facilities for describing user-supplied abstractions such as memory layouts, procedural interfaces, file formats, inter-process protocols, and so on. This lack severely limits the feasibility of building tools and systems which automate or abstract meta-level programming tasks, such as debugging, profiling, visualisation, instrumentation and composition. In turn, this massively hampers software development generally! To fix this, rather than designing meta-level facilities in the context of a whole new system, as has been done in Smalltalk and Lisp programming systems, my work chooses instead to evolve the services offered by existing Unix-like systems. Since meta-level tasks are far from uncommon, vestigial features to help carry them out can often be found “lurking” in Unix-like systems. By extending and evolving these features, my work can (only half-jokingly) be said to be “dragging Unix into the 1980s”. It does so while maximising compatibility, allowing new benefits to be applied to old code, in whatever language—the work spans C and C++ right up to JavaScript, Python and OCaml. One output of this work is liballocs, a run-time library for fast whole-process tracking of per-allocation metadata, including detailed type information. Another is libdlbind, which helps make JIT-compiled code visible to debuggers and dynamic loaders. Both of these are building blocks used by my other research efforts (below). Motto: if you must unify, unify at the meta-level of the operating system, not at the base language level or in language implementations.
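To make the per-allocation-metadata idea concrete, here is a toy C sketch of an instrumented allocator that records each allocation's type and answers reflective queries, even for interior pointers. This is purely illustrative: the names (`typed_alloc`, `type_of`) and the linear-scan index are invented for this sketch, not liballocs's real interface, which uses much faster address-keyed lookup and far richer type information.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* A per-allocation metadata record, as an instrumented allocator
 * might keep for every live heap object. */
struct alloc_record {
    void *base;
    size_t size;
    const char *type_name;   /* stand-in for much richer type metadata */
};

#define MAX_ALLOCS 1024
static struct alloc_record records[MAX_ALLOCS];
static size_t nrecords;

/* malloc, but recording the allocation's type alongside it. */
void *typed_alloc(size_t size, const char *type_name)
{
    void *p = malloc(size);
    if (p && nrecords < MAX_ALLOCS)
        records[nrecords++] = (struct alloc_record){ p, size, type_name };
    return p;
}

/* Reflective query: which allocation, of what type, does this
 * (possibly interior) pointer point into? */
const char *type_of(const void *p)
{
    for (size_t i = 0; i < nrecords; ++i) {
        const char *base = records[i].base;
        if ((const char *)p >= base && (const char *)p < base + records[i].size)
            return records[i].type_name;
    }
    return NULL;   /* not a pointer into any tracked allocation */
}
```

The point of the sketch is the query interface: once every allocation carries metadata, tools like debuggers and checkers can ask meta-level questions of arbitrary pointers at run time.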

Unsafe languages made safer and more debuggable

Huge amounts of code have been written, and continue to be written, in unsafe programming languages including C, C++, Objective-C, Fortran and Ada. Performance is one motivator for choosing these languages; continuity with existing code, particularly libraries, is another. The experience of compiling and debugging using these languages is a patchy one, and often creates large time-sinks for the programmer in the form of debugging mysterious crashes or working around tools' limitations. I have developed the libcrunch system for run-time type checking and robust bounds checking. This consists of a runtime library based on liballocs, together with compiler wrappers and other toolchain extensions for instrumenting code and collecting metadata. Slowdowns are typically in the range 0–35% for type checking and 15–100% for bounds checking. The system is also unusually good at supporting use of uninstrumented libraries, continuing after error, and supporting multiple languages. Its design is a nice illustration of an approach I call toolchain extension: realising new facilities using a combination of source- and binary-level techniques, both pre- and post-compilation, together with the relevant run-time extensions. Critically, it re-uses the stock toolchain to the greatest possible extent, and takes care to preserve binary compatibility with code compiled without use of these extensions. The mainline implementation uses the CIL library; during 2014–15, Chris Diamand developed a Clang front-end, with help from David Chisnall. Motto: unsafe languages allow mostly-safe implementations; combine compile- and run-time techniques to make this the pragmatic option.
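The essence of a run-time cast check can be sketched in a few lines of C. This is illustrative only: `check_cast` and the string-based type representation are invented for this sketch, not libcrunch's actual interface; the real system derives the allocation's type from liballocs metadata rather than a hand-planted tag.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* A heap object tagged with its actual type, standing in for the
 * per-allocation metadata that liballocs would supply. */
struct typed_obj {
    const char *type_name;
    char payload[64];
};

/* Instrumented pointer cast: at run time, ask whether the target type
 * matches what the allocation metadata says is really there. On
 * mismatch, report it (libcrunch can warn and continue, rather than
 * aborting). */
static int check_cast(const struct typed_obj *obj, const char *target)
{
    if (strcmp(obj->type_name, target) != 0) {
        fprintf(stderr, "type error: %s used as %s\n",
                obj->type_name, target);
        return 0;
    }
    return 1;
}
```

A compiler wrapper inserts such checks at pointer casts in the source, which is why uninstrumented code can still link and run unchanged alongside checked code.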

Rigorous specification of system services

The semantics of programming languages are well studied. Two equally crucial but much less-studied areas are the semantics of linking and (separately) of system calls. The challenges of verified toolchains and verified run-time environments depend critically on accurate specifications for these services. With my colleagues on the REMS project, we're working on detailed specifications of both ELF linking and the Linux system call interface. This also means developing tools for experimentally testing and refining these specifications. In summer 2015 Jon French developed tools for specifying system calls in terms of their memory footprints. Motto: a specification for every instruction.
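To give a flavour of what a memory-footprint specification expresses, here is a minimal C sketch describing the footprint of read(2) as a function of its arguments. The types and names here are invented for illustration; the project's actual specification tooling is considerably more expressive (covering return values, partial transfers, and so on).

```c
#include <assert.h>
#include <stddef.h>

/* A syscall's memory footprint: a region it reads or writes. */
enum access { FP_READS, FP_WRITES };
struct region {
    enum access kind;
    const void *start;
    size_t len;
};

/* Footprint of read(fd, buf, count): the kernel writes at most count
 * bytes starting at buf. The fd argument affects behaviour but not
 * which user memory is touched. */
static struct region footprint_of_read(int fd, void *buf, size_t count)
{
    (void)fd;
    return (struct region){ FP_WRITES, buf, count };
}
```

Once footprints are specified this way, they can be tested experimentally against a running kernel, which is how such specifications get refined.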

Principled approaches to debugging information

Debugging native code often involves fighting the tools. Programmers must trust to luck that the debugger can walk the stack or print values; when it fails, they must work with incomplete or apparently impossible stack traces, “value optimized out” messages, and the like. Obtaining debugging insight is even harder in production environments where unreliable infrastructure risks both downtime and exploits (see Linus Torvalds in this thread, among others). With REMS colleagues and Francesco Zappa Nardelli, we're working on tools and techniques for providing a robust, high-quality debugging experience. This comes in several forms: at its simplest, it means sanity-checking compiler-generated debugging information. At its most complex, it means propagating more information across compiler passes, to enable debugging information that is correct by construction. Debugging is also in tension with compiler optimisation, and is inherently in need of sandboxing (to limit who can consume what application state). Motto: if it compiles, it should also debug.

As of April 2016, this work is at an early stage.

Cross-language programming without FFIs

Language barriers are holding back programming. They splinter the ecosystem into duplicated fragments, each repeating work. Foreign function interfaces (FFIs) are the language implementers' attempt at a solution, but do not serve programmers' needs. FFIs must be eliminated! Instead, using a descriptive plane (again, provided by liballocs), we can make higher-level language objects appear directly within the same metamodel that spans the entire Unix process, alongside lower-level ones, which are mapped into each language's “object space”. For example, in my hacked nodeJS (quite a bit of V8 hacking was required too), all native functions visible to the dynamic linker are accessible by name under the existing process object. You're no longer dependent on hand-written C++ wrapper code, nor packages like node-ffi to access these objects. The type information gathered by liballocs enables direct access, out-of-the-box. It's far from a robust system at the moment, but the proof-of-proof-of-concept-concept is encouraging. Motto: FFIs are evil and must be destroyed, but one VM need not (cannot) rule them all.

Comprehensive, easy-to-use instrumentation on managed platforms

Programmers often want to develop custom tooling to help them profile and debug their applications. On virtual machines like the JVM, this traditionally means messing with low-level APIs such as JVMTI, or at best, frameworks like ASM which let you rewrite the application's bytecode. In Lugano, work done together with my DAG colleagues proposed a new programming model for these tasks based on aspect-oriented primitives (join points and advice), and also a yet higher-level model based on an event processing abstraction reminiscent of publish-subscribe middleware. On the run-time side, our infrastructure addresses the critical problem of isolation: traditionally, instrumentation and application share a virtual machine, and may interfere with one another in ways that are difficult to predict and impossible, in general, to avoid except by arbitrarily excluding shared classes (such as java.lang.* and the like). Our work gets around this by avoiding the very concept of bytecode instrumentation—instead we perform instrumentation using a straight-to-native path that hands off run-time events to a separate process (the “shadow VM”). This allows much stronger coverage properties than are possible with bytecode instrumentation, and nicely handles distributed applications, while also bringing challenges with event synchronisation and ordering. Motto: robust, high-coverage instrumentation without bytecode munging or hacky exclusion lists.

Efficient data structures for address-associative mappings

As a by-product of my work on reflective metadata in Unix processes, I've built various data structures for fast indexing of in-memory structures by their virtual address. The key idea is to use virtual address translation itself as the first stage of any lookup. This approach has its roots in folklore, but some of the extensions and variants I've built seem to be novel. Combined, they add a complement of abilities including support for both wide and narrow range queries, iteration, and efficiency under differing degrees of sparseness and clusteredness in the key-space usage. These structures are pleasingly simple to code, and have been known to offer significant out-of-the-box performance improvements relative to more complex hand-rolled structures. Their overall performance characteristics are, however, complex and non-obvious. A personal research project of mine is to explore, measure and characterise these characteristics. Motto: virtual memory is hardware-assisted associative lookup.

This work is at an early stage.

Radically easier composition

Programming by composition is the norm, but the norm also imposes needlessly high costs on developers. Keeping up with APIs as they evolve is a familiar drain on programmers' time. Inability to migrate to a different underlying library or platform is a familiar inhibitor of progress. It doesn't need to be this way. Why isn't software soft, malleable and composable? Why is so much tedious work left to the human programmer? My PhD work explored an alternative approach in which software is composed out of units which aren't expected to match precisely. A separate bunch of languages and tools, collectively called an “integration domain”, are provided to tackle the details of resolving this mismatch. I designed the language Cake to provide a rule-based mostly-declarative notation for API adaptation. The Cake compiler accepts a bunch of rules relating calls and data structure elements between APIs, including in various context-sensitive fashions. From these it generates (nasty C++) wrapper code, saving the programmer from writing wrappers by hand. Motto: down with manual glue coding, down with silos!
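As a flavour of the problem, here is the kind of hand-written C glue that a declarative Cake rule replaces. The two mismatched APIs (`open_file` in the old component, `file_open` expected by the client) are hypothetical, invented for this sketch; a Cake specification would state the correspondence as a rule and leave generating the adapter to the compiler.

```c
#include <assert.h>
#include <string.h>

/* The "old" component's interface: no mode argument, returns a
 * descriptor, or -1 on failure. (A stub standing in for a real
 * library function.) */
static int open_file(const char *name)
{
    return strlen(name) > 0 ? 3 : -1;
}

/* Hand-written adapter implementing the interface the client was
 * written against. This is exactly the tedious, mechanical glue
 * that Cake generates from a rule relating the two calls. */
static int file_open(const char *name, const char *mode)
{
    (void)mode;   /* the old API has no notion of mode; drop it */
    return open_file(name);
}
```

Multiply this by every call and data structure shared across a component boundary, and the appeal of generating such wrappers from declarative rules becomes clear.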

What is that?

A recurring theme of my work is that I like to ask big questions, including questions of the form “what is X?” (for various X). The X so far have been the following.

Published articles

See also my author page on DBLP (with much credit to Michael Ley for this superb service).

Peer-reviewed abstracts, tool demonstrations and similar

Manuscripts, reports, dissertations etc.

Talks

Interests

My research interests are outlined at the top of this page. In summary: although broad, they fall mostly within the intersections of systems, programming languages and software engineering.

I keep a calendar of approximate submission deadlines and event dates for most of the conferences and workshops that I might conceivably attend, together with a few for which it's verging on inconceivable. In case it's useful, here it is.

PhD work

For my PhD I worked on supporting software adaptation at the level of the operating system's linking and loading mechanisms. Here “adaptation” means techniques for connecting and combining software components which were developed independently, and therefore do not have matched interfaces. My emphasis was on techniques which are practical, adoptable and work with a wide variety of target code, at the (hopefully temporary) expense of safety and automation.

The main focus of my work was Cake, a special-purpose language for describing relations between the interfaces of binary components (specifically, relocatable object code). Cake makes heavy use of DWARF debugging information, and can be considered interesting in several ways: as a domain-specific rule-based programming language; as a “composition”, “configuration” or “linking” language; as a dynamic language; as a runtime system sharing commonalities with garbage collectors and debuggers. It does not really make contributions in the domains of module systems or linking models.

To find out more, please browse my publications, and do contact me for more information. There will hopefully be one or two more papers appearing on additional work that I did during my PhD years. My dissertation is now available too.

Support and acknowledgements

I'm very grateful to EPSRC and Cambridge Philosophical Society for the funds which supported my PhD research work and some related travel, and to the Graduate Research Fund and Emily & Gordon Bottomley Fund of Christ's College, EuroSys, The Royal Academy of Engineering, ACM SIGSOFT and ACM SIGPLAN for additional support of my research travel and conference attendance.

Local activities

In Cambridge, I coordinated the NetOS group talklets from January 2009 until January 2010. I also had a librarian-like role of curating a small group library and keeping a very vague track of the various books we had lying around (local users: see /usr/groups/netos/library). Finally, I looked after (in a rather neglectful fashion) the Atlas Room BBC Micro (about which I should write more some time).

I'm a Fellow of the Cambridge Philosophical Society.

As of May 2015, I am a postdoctoral affiliate at Christ's College (of which I'm also an alumnus, from my BA and PhD days).

Professional activities

I'm a member of the ACM, ACM SIGSOFT, SIGPLAN and SIGOPS.

In 2015 I was on the PC for Onward! Essays and a reviewer for ACM TACO.

In the recent past I have been an external reviewer for the ASE journal, on the programme committee for PPPJ 2013, publicity chair for SC 2013, on the programme committee for RESoLVE 2012 at ASPLOS, and the shadow programme committee for Eurosys 2012.

Previously, I was privileged to contribute external reviews to SVT at SAC 2012, ESOP 2010 and EuroSys 2009.

Students

For now, my research “leadership” is confined to the student projects I've been known to supervise, which are in the Teaching section. I am always interested in working with enthusiastic Bachelor's, Master's and doctoral students. I maintain a list of project ideas, and am always happy to talk about other ideas.

History

During 2005–06 I was a Research Assistant in Cambridge on the XenSE and Open Trusted Computing projects, under Steven Hand. Both projects sought to implement a practical secure computing platform, using virtualisation (and similar technologies) as the isolation mechanism.

XenSE never had a web page of its own, but you might want to look at the abstract on the project's EPSRC Grant Portfolio page, or check out the mailing list.

OpenTC is a large EU-funded project involving many major industrial and academic partners, focused on the use of Trusted Computing Group technology to realise many common secure computing use cases.

As part of my work as an RA, I became interested in secure graphical user interfaces including L4's Nitpicker, a minimal secure windowing system. I began work on ports of this system to Linux, XenoLinux and the Xen mini-OS: the Linux version became mostly functional (but not yet secure!) while the others were stymied by various limitations with shared memory on Xen. These limitations are mostly fixed now, but I haven't had time to revisit the project since. Feel free to contact me for more information. If you wanted to take these up, I'd be glad to hear from you.

Prehistory

Teaching

Supervisions a.k.a. tutorials

During spring 2011 I was a tutor for the Digital Systems course in Oxford.

In Cambridge I have supervised (tutored) many systems and programming courses from the Computer Science Tripos. The list below includes both current and past courses I supervised, together with any additional materials I prepared.

Current courses

Older courses

Lectures

In April 2010 I gave a lecture to the MPhil in Advanced Computer Science class in Cambridge, as part of the Cambridge Programming Research Group mini-series within the Research Students' Lecture series. My lecture was entitled “Modularity – what, why and how”. Contact me for slides. Other lectures in the CPRG mini-series were given by Dominic Orchard, Max Bolingbroke and Robin Message.

Demonstrating

During Michaelmas 2009, in Cambridge, I demonstrated the MPhil course Building an Internet Router, run by Andrew Moore.

Projects

I'm interested in supervising bright and enthusiastic Bachelor's and Master's students for their individual projects. For ideas and to find out what I'm interested in, see my list of suggested projects. I'm also always extremely happy to talk to students who have their own ideas.

I've supervised several projects in the past. Bachelor's students at Cambridge can read my thoughts about Part II projects, see the project suggestions for 2010–11 from me and others in the NetOS group, or from the entire Lab and beyond and contact me if you're interested. If you have your own ideas which you think I might make a good supervisor for, I'm always happy to talk about those too. Previously I've been fortunate enough to work with the following final-year students:

Software

My projects

Note: as of late 2011, I have started using GitHub. Over time, code for all my larger projects will start to appear on my GitHub page. This page is likely to remain the more complete list, for the time being.

My research work involves building software. Inevitably, this software is never “finished”, rarely reaches release-worthy state, and usually rots quickly even then. So not much here is downloadable yet; this will improve over time, but in the meantime I list everything and invite you to get in touch if anything sounds interesting. My main projects, past and present, are:

Smaller contributions

I've also submitted small patches to various open-source projects including LLVM (bugfix to bitcode linker), binutils (objcopy extension), gcc (documentation fix for gcj), Knit (compile fixes), Datascript (bug fixes), DICE (bug fixes), pdfjam (support “page template” option) and Claws Mail (support cookies file in RSS plugin). Some of them have even been applied. :-)

Goodies

Apart from my main development projects, I sometimes produce scripts and other odds and ends which might be useful to other people. Where time permits, I try to package them half-decently and make them available here.

For computer science researchers

I have written some scripts which attempt to retrieve decent BibTeX for a given paper (as a PDF or PostScript file). details

For researchers in a research institution

I've written a nifty script for printing papers, which helps people save paper, share printed-out papers and discover perhaps unexpected collaborators within their institution. details

For supervisors of Tripos courses (in Cambridge)

I have a Makefile which downloads and compiles Tripos past-paper questions. It's pretty much self-documenting. Here it is.

For general audiences

I have built a sizable collection of vaguely useful shell scripts and other small fragments. One day “soon” I will get round to publishing them. The biggest chunks are my login scripts, which use Subversion to share a versioned repository of config files across all the Unix boxes that I have an account on, and the makefile and m4 templates that build this web page. I need to clean these up a bit. In the meantime, if you're interested in getting hold of them, let me know.

Thoughts

Occasionally I write down some thoughts which somehow relate to my work. They now appear in the form of a blog. Posts are categorised into research, teaching, development, publishing and meta strands.

Personal

I have the beginnings of a personal web page. It's very sparse right now. Have a look, if you like.

Contact

Office: FS13
E-mail: deduce from firstname.lastname@cl.cam.ac.uk
Post: Dr Stephen Kell
Computer Laboratory, University of Cambridge
15 JJ Thomson Avenue
Cambridge, CB3 0FD
United Kingdom

Content updated at Fri 29 Apr 17:54:00 BST 2016.