Rambles around computer science

Diverting trains of thought, wasting precious time

Tue, 08 Apr 2014

Dynamic linking and security

The thoughts in this post were provoked by Tim Brown's very interesting Breaking the Links article.

Dynamic linkers are notorious for privilege escalation bugs. The reason is their interaction with the setuid mechanism, and indeed any mechanism that associates privileges with an executable. Unix's original model, in which executables are trusted in their entirety, is fundamentally flawed on modern platforms with shared libraries, where executables usually link in other code, some of which can be supplied by the user. Rather than getting rid of the now-flawed setuid mechanism, dynamic linkers currently impose a raft of ad-hoc restrictions, lashed together in the hope of closing off any path by which user-supplied code can get into a setuid process. They must also balance this against another goal: avoiding the unwanted side-effect of ruling out perfectly trustworthy compositions. Unfortunately, these ad-hoc measures invariably fail on both counts.

What does setuid mean? It means that the invoking user has access to any behaviour allowed by the setuid program, since that program executes with its owner's effective uid. Attackers escalate their privileges by introducing unanticipated code which widens that set of behaviours. Can we take a better approach? One naive idea would be to construct the process as normal, and then check that it includes only trusted code; at that point, we decide whether it runs with elevated privileges or not. (A wart is that we also have to account for code loading after the start of execution; let's ignore that for now.)
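
To make the naive idea concrete, here is a minimal sketch of the check, written as if it ran inside the process just after the dynamic linker has built the image. The is_trusted() predicate is hypothetical (in reality the policy, and the check itself, would live in the dynamic linker or the kernel); the rest is standard glibc API.

#define _GNU_SOURCE
#include <link.h>      /* dl_iterate_phdr */
#include <unistd.h>    /* setresuid, getuid */

/* Hypothetical predicate: is the code in this file trusted? */
extern int is_trusted(const char *path);

static int check_one(struct dl_phdr_info *info, size_t size, void *data)
{
	int *all_trusted = (int *) data;
	/* The empty name denotes the main executable, which setuid already vets. */
	if (info->dlpi_name[0] != '\0' && !is_trusted(info->dlpi_name))
		*all_trusted = 0;
	return 0; /* keep iterating */
}

/* Sketch: once the process image is built, decide whether to keep the
 * elevated effective uid or to fall back to the invoking user's uid. */
void decide_privileges(void)
{
	int all_trusted = 1;
	dl_iterate_phdr(check_one, &all_trusted);
	if (!all_trusted)
		setresuid(getuid(), getuid(), getuid()); /* drop the elevation */
}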

Does this break anything? Certainly it will. I might run a job that spawns a process tree in which some subprocess is setuid. Normally I can run the tree with some other library LD_PRELOADed, expecting that although my library won't get preloaded into the setuid process, that process will still run with elevated privileges. Under our proposed new model, if we do the preloading and then discover that the preloaded library is not trustworthy, we will run the process with lower privileges, and likely break the process tree (assuming it really needed to be setuid).


This is a feature interaction, and what we need is a policy for resolving the interaction. Current Unices have the policy that “setuid clobbers LD_PRELOAD”. The alternative we just considered is that “LD_PRELOAD clobbers setuid”. Neither of these seems adequate. Perhaps instead we can evolve things towards a more subtle mechanism that avoids the interaction in the first place, say by selecting among untrusted and trusted libraries. For example, if there are multiple available versions of some library, we might use the more trustworthy one instead of the one that a fixed set of name lookup rules guides us towards. In general, we can see this as resolving ambiguity among a set of depended-on library specifications in a way that maximises value (both trust and functionality).

Doing so requires a way to designate what code is trusted, not just what executables are trusted. We also need a sense of what alternative ways there are of satisfying “the same” link requirement. I have been using the example of LD_PRELOAD so far, but on ELF platforms, link requirements (beyond the executable) are specified as either a PRELOAD or (more often) a NEEDED, a.k.a. a DT_NEEDED entry in the ELF .dynamic section.
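
(For the unfamiliar: you can list a binary's NEEDED entries with readelf. For a simple dynamically linked program, hypothetically called someprog here, the output looks something like the following.)

$ readelf -d ./someprog | grep NEEDED
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]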

To find alternative implementations of “the same” requirement, we can mine the redundancy inherent in RUNPATH, LD_LIBRARY_PATH and perhaps the multiple notions of ORIGIN that can be created by hard-linking. Each of these might provide multiple candidate libraries. Setting up a fake ORIGIN is a trick familiar to crackers, but we can turn it around by enumerating all possible ORIGINs of a given shared object and considering all the libraries we find there. (Sadly this requires a scan over all directories in the filesystem, and in the common case will yield only one library. But this approach will defeat link-based attacks, since even after hard-linking, we will still find the original library, and any sensible trust heuristic will select it in preference.) The ABI tag matching (modified by LD_ASSUME_KERNEL) is another instance of how the linker will look for libraries in particular places satisfying particular properties, in a way that is currently very rigid but could be generalised into a search/optimisation problem where paths supplied by developers, administrators and users are used as hints and bootstrapping input, rather than definitive instructions.
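
As a rough sketch of where the candidates might come from, here is how one might enumerate the files that LD_LIBRARY_PATH alone offers for a given soname; a real implementation would fold in RUNPATH, the enumerated ORIGINs and the system default directories too. (The soname libz.so.1 is just an example.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <limits.h>

/* Print every file that LD_LIBRARY_PATH offers for a given soname. */
void enumerate_candidates(const char *soname)
{
	char *paths = getenv("LD_LIBRARY_PATH");
	if (!paths) return;
	paths = strdup(paths); /* strtok modifies its argument */
	for (char *dir = strtok(paths, ":"); dir; dir = strtok(NULL, ":"))
	{
		char candidate[PATH_MAX];
		snprintf(candidate, sizeof candidate, "%s/%s", dir, soname);
		if (access(candidate, R_OK) == 0)
			printf("candidate: %s\n", candidate);
	}
	free(paths);
}

int main(void)
{
	enumerate_candidates("libz.so.1");
	return 0;
}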

This approach brings two further problems. Firstly, what's to prevent us from choosing a probable-looking binary that is semantically broken (with respect to our use of it)? We can argue that all binaries with the same soname should be interchangeable, but in practice there will be difficulties. And matching by soname might be too restrictive anyway. Secondly, like any search- or rule-based system, our approach has a “delocalising” effect, lessening the administrator's explicit control and making the linker's behaviour more complex to configure and debug.

Another subtlety is that trust in the part is not the same as trust in the whole. Even if we refine Unix's notion of trustedness down to libraries rather than just executables, some exploits can work by combining trusted code in untrusted ways. The case of another linker exploit, CVE-2010-3856, is one instance of this: the library is sane enough that it could easily be deemed trusted, but we can construct a very specific context in which it is anything but. (This context is: use it as a linker-auditing library to a setuid binary, causing its constructor to be run with elevated EUID, hence allowing a temporary file exploit that would not emerge in “normal” contexts where the constructor did not have elevated privileges.) This is a classic “confused deputy” situation.

Confused deputies are always a good argument for yet finer-grained models of privilege, such as capabilities. So it's not clear whether we would get much security value from search-based link-time composition, relative to plumbing a better model more deeply into our operating system.

[/research] permanent link

Mon, 13 Jan 2014

C libraries and linking

At my talk today, Simon PJ asked an interesting question, to which I managed to give a slightly wrong answer. I had observed that asking my C compiler to link an object file invoked the linker with a lot of extra input files, many of which are specific to the particular C library implementation being linked to. Note the various crt*.o files in the following link command concocted by gcc. Most of these come from the GNU C library (crtbegin.o and crtend.o are supplied by GCC itself).

$ gcc -### -o hello hello.o 
(snip)
/usr/local/libexec/gcc/x86_64-unknown-linux-gnu/4.8.0/collect2 \
  --eh-frame-hdr \
  -m elf_x86_64 \
  -dynamic-linker /lib64/ld-linux-x86-64.so.2 \
  -o hello \
  /usr/lib/x86_64-linux-gnu/crt1.o /usr/lib/x86_64-linux-gnu/crti.o \
  /usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/crtbegin.o \
  -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0 \
  -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/../../../x86_64-linux-gnu \
  -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/../../../../lib64 -L/lib/x86_64-linux-gnu \
  -L/lib/../lib64 -L/usr/lib/x86_64-linux-gnu \
  -L/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/../../.. \
  hello.o \
  -lgcc \
  --as-needed -lgcc_s --no-as-needed \
  -lc \
  -lgcc \
  --as-needed -lgcc_s --no-as-needed \
  /usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.8.0/crtend.o \
  /usr/lib/x86_64-linux-gnu/crtn.o

What does this mean if I've compiled some of my program with compiler A (from some vendor whose C library is in /usr/A/libc.a, say) and some with compiler B (from another vendor whose C library is in /usr/B/libc.a)?

It's tempting to say that C compilers are strongly coupled to their library, so we must link via some unique C compiler and use only its library. Does this preclude using another C compiler for some of our program? I answered more-or-less in the affirmative... but it's not true! There are two clear (in hindsight) bits of evidence to the contrary.

The first is that empirically, it's easy to see the same C library being used by multiple compilers. The important thing is that there's only one set of library headers. When I install clang on my Linux box, it happily uses the incumbent glibc library headers when compiling. When linking, it happily issues the right linker command to link with the glibc binaries. Indeed, it issues a very similar linker command to the one we saw earlier. We can again see the glibc-provided crt*.o objects being linked in.

$ clang -### -o hello hello.o
Ubuntu clang version 3.2-1~exp9ubuntu1 (tags/RELEASE_32/final) (based on LLVM 3.2)
Target: x86_64-pc-linux-gnu
Thread model: posix
 "/usr/bin/ld" "-z" "relro" "--hash-style=gnu" "--build-id" "--eh-frame-hdr" \
 "-m" "elf_x86_64" "-dynamic-linker" "/lib64/ld-linux-x86-64.so.2" \
 "-o" "hello" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crt1.o" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crti.o" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/crtbegin.o" \
 "-L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7" \
 "-L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu" \
 "-L/lib/x86_64-linux-gnu" \
 "-L/lib/../lib64" "-L/usr/lib/x86_64-linux-gnu" \
 "-L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../.." "-L/lib" "-L/usr/lib" \
 "hello.o" "-lgcc" "--as-needed" "-lgcc_s" "--no-as-needed" "-lc" "-lgcc" \
 "--as-needed" "-lgcc_s" "--no-as-needed" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/crtend.o" \
 "/usr/bin/../lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crtn.o"

But how does it know about these files? The answer is worse than I had imagined. The file lib/Driver/ToolChains.cpp in clang's codebase embodies a ton of knowledge about linking on different platforms—even down to individual GNU/Linux distributions and versions thereof. Unsurprisingly, this is fragile and has been known to spawn bugs, like this one.

The second bit of evidence is that most C compilers let you tell them to use a foreign set of headers, which could come from any C library implementation we like. To avoid the “standard” headers in /usr/include you need an option like -nostdinc, and then -I to point the compiler at the right headers. Assuming the headers don't use any non-standard features, there's no reason why any compiler couldn't generate code against any other vendor's library headers.
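
For example, something like the following should compile a file against a foreign C library's headers; the /opt/musl path is purely illustrative (wherever your alternative libc's headers happen to be installed).

$ gcc -nostdinc -I /opt/musl/include -c -o hello.o hello.c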

Of course, “there's no reason why not” often translates to “it is unfortunately impossible”. A data point in this case is provided by uClibc, a small replacement C library, whose FAQ notes that “it is possible in some limited cases to re-use an existing glibc toolchain and subvert it into building uClibc binaries by using gcc commands such as -nostdlib and -nostdinc... [but] it proved impossible to completely subvert an existing toolchain in many cases.” I'd love to dig into the details of what didn't work, but that will have to wait for another day. (It might just amount to the same issue we've noted, i.e., making the C compiler generate the right link commands... but it wouldn't surprise me if there's more to it.)

Another case I haven't handled: what about multiple C libraries (or parts thereof) in the same process? An obvious problem is conflicts in the symbol namespace—since the ABI likely requires a unique definition of some symbols, forcing a choice between the two libraries. (Chapter 6 of the System V AMD64 psABI is in a vague state, but appears to confirm the existence of this kind of constraint.) However, with enough hammering on symbol renaming and scope-localisation, there's no reason why certain portions of two different C libraries couldn't coexist. But it seems unlikely that two implementations of any low-level parts (such as startup and shutdown, threading and signal handling) could be combined in a working fashion without being explicitly designed to do so.
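
The “hammering” I have in mind can be done with stock binutils. As a sketch only (the object file names and the alt_ prefix are invented), one might rename or localise clashing symbols in one library's relocatable objects before the final link:

$ objcopy --redefine-sym memcpy=alt_memcpy string_funcs.o   # rename a clashing definition
$ objcopy --localize-symbol=qsort sort_funcs.o              # or make a definition local to this object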

In summary: any compiler “should” in principle be able to generate code targeting any C library on a given platform, but there are inevitably some warts that inhibit this mix-and-match in certain cases. Moreover, knowing how to link to a given C library requires knowledge of nasty implementation details of that C library. These details could perhaps be “promoted” to a vaguely standardised ABI detail of (in our case) the combined GNU/Linux/glibc platform, but this hasn't been done so far.

Another question that Simon asked was whether we could apply some hindsight to come up with a simpler model which avoids the hair-raising complexities that I'd talked about. One suggestion I had was a single virtual address space, which would eliminate the need for position-independent code in shared libraries (since a library's load address could be assigned at deployment time). Later, Raphaël also reminded me that Plan 9 does away with shared libraries altogether, apparently without it costing too much in memory. I'm sceptical that this wouldn't be costly on a desktop system though (think how many copies of the KDE or GNOME libraries you'd end up mapping, for example). I'm also a big fan of Multics-style orthogonal persistence, which ties in quite nicely with the SVAS approach, but is a far-reaching change. Meanwhile, I think the trade-offs surrounding large-versus-small processes and the complexity of near-versus-far addressing modes are quite difficult to avoid (without an obvious performance hit), since they come to us from the hardware. Perhaps we could use reconfigurable hardware somehow to push all that complexity out of the view of compiler writers... but I doubt any hardware designers consider this a problem worth tackling.

I'm giving a “Part 2” follow-up talk on (most likely) Monday 3rd February.

[/research] permanent link

Tue, 26 Nov 2013

(Tell me why) I don't like Java

I can just about cast my mind back to when I learnt Java. It was 2001, when I was a 17-year-old C++ programmer. (Actually the particular version of Java I first learnt was called C#, but let's not complicate things.) I remember being fairly impressed. It struck me as a C-family language that had kept enough of C++'s features to do abstraction neatly enough, modulo the occasional grumble, while making some impressive complexity savings by carefully-chosen omissions. (I knew nothing about Smalltalk at the time.)

At the time, many of these omissions were bold, and the way they worked together was, to my 17-year-old self, ingenious. Lack of subobject structure keeps garbage collection simple. In turn, garbage collection (with compaction) allows faster heap allocation, clawing back some of the cost of all these extra objects. Even doing away with templates, by relying on checked downcasts and a common supertype, had its elegance, despite now being supplanted by generics. Grumble-wise, I won't say I wasn't alarmed by the lack of multiple inheritance and operator overloading. But overall I wasn't unsympathetic to what Java was doing.

Nowadays, the mention of Java makes me groan (in my mind). It seems to be a combination of the intrinsic and extrinsic properties that makes it so. Thinking extrinsically, Java has a huge, huge mindshare. Like market share, mindshare is dangerous once one player gets hold of too much of it. Many intrinsic details of Java would be unremarkable warts in another context, but have a giant significance, because they're what a large proportion of programmers think are “the way programming is”. In reality, Java contains a selection of somewhat-arbitrary design choices made by clever but flawed human beings some time within the last couple of decades. By letting them become ingrained, we are closing people's minds and inhibiting adoption of better alternatives. What follows is a quick run-down of how I see the problems.

As a language, it's boring. This is actually one of the least bothersome things about Java, but it needs mentioning here. As a researcher, doing any work that is specific to the Java language feels like work not done properly, because it avoids a lot of potentially more interesting cases. (I'm being a little facetious here, but I hope the underlying point is visible.) Against today's competition, Java seems remarkably inexpressive as a language. C++ is cleaner than ever, Scala is achieving a C++-like complexity curve with nicer syntax and marginally less unclear semantics, and dynamic languages are faster than ever. What keeps Java going is its self-perpetuating ubiquity. It also helps that Java is in a sweet spot regarding IDE assistance: Java code has enough statically-derivable properties to do automations like refactoring and autocompletion reasonably precisely, while being simple enough—unlike C++ and, I venture, Scala—to implement them reasonably correctly without giant effort. Is this a good thing or not? It has no doubt delivered certain innovations to practitioners faster than would otherwise be possible. But right now, to me, its net effect seems to be to stifle innovation in better languages.

As a learning device, it's a dog's breakfast. This is probably my number-one beef. Java is too complex to cleanly expose concepts like procedures, structured data, or pure object-oriented programming—never mind functional styles of programming. Yet it is too semantically simplified (constrained) to reveal essential contrasts and orthogonalities, such as by-value versus by-reference, heap versus stack, dynamic versus static type checking, and inheritance versus subtyping. Various conflations of these are the deliberate features of Java's design that make it simple (relative to C++, say). But now that Java is ubiquitous, they start to be harmful too. I concede that just because a language isn't the ideal vehicle for exploring conceptual distinctions doesn't make it worthless as a programming tool—far from it. But the effect of mindshare is huge. The concepts of programming are harder to teach in a world where we must teach Java, because it is not a vehicle for cleanly conveying any of these concepts. My impression is that today's programmers learn fewer languages than ever, so it is harder to establish contrasts. Even diverse languages are being shoehorned onto the JVM, further enshrining Java's [bytecode's] limited vocabulary as a set of fundamentals. It's like some perverse “worse is better” situation, except that the usual point in favour of the “worse” solution, namely simplicity, is not much in evidence. As I'll rant in a moment, Java is a very complex beast below the language level.

Portability is a myth. Java's portability was its key selling point, but it's not clear that it has succeeded. Although Java makes it hard to write code with nonportable semantics, that is only a small part of the portability puzzle. Libraries are a big issue. JVMs are complex enough to suffer compatibility problems among each other, too. So now we just have portability problems one level up. I've been sceptical about the drive for portability because in general, no two portability requirements are quite the same. Having “the same behaviour among every program built using technology X”, which is my paraphrase of Java's portability sell, is primarily a benefit to the authors of technology X, not to the software's users, nor even to the application developers. For example, as Ian Lance Taylor pithily blogged (talking about Tk), saying that applications look the same on any platform means they look odd on every platform. Attempting to insulate the JVM and its libraries from all details of the host system has become a dogma pursued far beyond its usefulness. This theme continues....

As a deployment infrastructure, it's a mess. Again in the name of “portability”, the Java platform tries to hide all trace of the underlying system, defining its own conventions for configuration (classpath, system properties), resource limits (Java heap size), its own archive formats, its own security model, and so on. The problem is that the underlying system always has some way of doing these things, and by duplicating this functionality, the end result is excess complexity and hard-to-understand interactions. Even despite massive uptake, the JVM isn't the only runtime. The approach of eliminating complexity by defining “one platform to rule them all” is a hopeless modernist ideal. In practice, it just adds one more platform to the mix, causing a strict increase in complexity. This is the key paradox of attempting to achieve simplicity through portability. It can only succeed if the portable thing completely hides the underlying complexity. This is less and less likely the higher up the stack you try it. Instead we should aim to build systems out of simple (but not necessarily portability-providing) pieces, then achieve systemic assurances by reasoning about whole compositions. Typically, the kind of “whole composition” I'm talking about would be a runtime plus an operating system plus an instruction set architecture. (Interestingly, I contend that stopping at the ISA is sensible; we don't have to go all the way down to microarchitecture. Unlike JVMs, ISAs are low-down enough that they can hide the underlying complexity fairly well.)

As a development infrastructure, it is diabolical. Debugging and dynamic analysis on the Java platform are hugely, horribly flawed. I wrote a paper about one aspect of this a while back. Primary evidence is how the leading implementation (namely Hotspot) implements various profiling and debugging tools (like hprof) using private interfaces (namely the “serviceability agent”), because the pseudo-standard interfaces aren't good enough. And that's in spite of their unfathomable complexity. These interfaces—JVMTI, JDI and friends—are not officially part of the JVM specification, and are pseudo-standards in that no two implementations are quite alike. They also offer inherently poor coverage, because two large chunks of the process's code—namely natives and the VM itself—are respectively not covered, or covered in limited fashion using a disjoint mechanism (which may or may not be implemented). As a result, if you want to do something as simple as observing all allocations of Java objects in your program, you have to do three different (and fiddly) things: instrument a selection of bytecodes (those that create objects and arrays), handle JVMTI's VMObjectAlloc callback, and write JNI function interceptors to catch creation from native code. Even then, you're a long way from understanding the memory behaviour of your program, since, surprise surprise, native allocations—including (on most VMs) those made by the VM itself—are actually very significant. There was even a paper at OOPSLA 2010 about this. There are analogous problems in other kinds of dynamic analysis in Java. Try implementing an information flow analysis at the JVM level, and you will be stymied, because information is continually flowing through native code and through the VM itself, and these are wide, complex, undocumented interfaces. (By contrast, doing this at the whole-process level requires only modelling the system call interface, which is manageably-sized and stable.) Java-specific interfaces simply cannot cover, by definition, the whole behaviour of your program.

Debugging technology has gone backwards. This is a bit of a specialist rant, so forgive me. It builds on what I just wrote about dynamic analysis. The conventional approach to debugging takes a simple and high-coverage abstraction, namely the memory image of the debugged process, as the baseline. On top of this, we selectively describe how source-level language features are realised. We do this by documenting the compiler's (and runtime's) implementation decisions, using standard (albeit quirky and complex) formats like DWARF. Apart from GNU Java, no Java implementation I know does this. Instead, they rely on Java-specific interfaces. In so doing, they take a big technological step backwards, for no better reason than expedience. I can excuse VM prototype developers for taking the short cut of knocking up an in-process debug server with a fixed wire protocol, and building shim “tool interfaces” as thin abstractions of their internal APIs. But in a platform that is the outcome of billions of dollars of product development, there is no such excuse. As with portability, Java adopts an approach which can only work if it owns the entire system. Since it does not, it fails to eliminate any complexity, and instead just adds more.

As a culture, it's conspicuously herd-like. I suppose this is a universal property of communities. Languages as practical tools tend to take on the properties of the individuals using them. That's why Haskell is over-clever, Python is hack-filled, C is rarely well commented and Java is bureaucratic and verbose. In each case you could say the same for these languages' advocates. My distaste for Java owes partly to the fact that it is favoured by not-so-good programmers and middle managers. The effect permeates the community. Java people love reinventing things which existed before—only this time, it's “pure Java!”. They also seem to love XML, a technology I despise. They outdid themselves by combining these two properties to spectacularly ill effect, as known to anyone who's ever edited a file called build.xml. Reading about Eclipse plugins makes me jaded. I don't know whether it's the chicken (programmers) or egg (Java language) that's to blame for all these problems. Either way, practice has forgotten far too many principles: of keeping it simple, of not duplicating mechanisms, of using information hiding to hide what is change-prone rather than just to make your code more complicated. The principles of object-orientation itself are long forgotten. To illustrate this latter point, I defer to the very nice (developer-focused) talk by Kevlin Henney titled after William Cook's careful statement that “it is possible to do object-oriented programming in Java”.

Here's an anecdote to finish. I just read a research paper containing the following two sentences.

The first kind of behaviors is interface behaviors. (Please note that the name has nothing to do with Java Interfaces.)

Sadly, I can understand why the qualification is necessary. But it really shouldn't be. If we're doing research, what we're doing should transcend the immediate technology, and we should be accustomed to a presentation which reflects that. But even in research communities, too many people equate doing X “for software” with doing X “for Java”, and this is a sad state of affairs.

[/research] permanent link

Sat, 23 Feb 2013

A curiously recurring explanation

In conversation recently(-ish), I tried to explain the infamous Curiously Recurring Template Pattern (CRTP) of C++ by relating it to mixins. That just turned one surprisingly tricky problem into two. Here I will try to rectify both problems by providing a somewhat coherent explanation of how to realise mixins using CRTP.

/* defining a mixin */
template <class Instantiator>
class my_mixin
{
	int some_state;
public:
	void invoke_feature() { /* using some_state, somehow */ }
};

/* "mixing in" a mixin */
class built_from_mixins
 : public 
      my_mixin<built_from_mixins> /* , other mixins ... */
{
    /* ... */
};

Unlike normal base classes, a mixin is designed to attach at an arbitrary point in the subtyping hierarchy. It's an orthogonal feature that can be “mixed in” to many potential classes, rather than an increment on a particular class. Therefore, rather than referencing a specific base class, a mixin leaves this reference unbound. Nevertheless, it still gives a name to the class it is extending. Here it is called Instantiator. This can be thought of as a “forward reference” to the class that is “mixing it in”. The name is a placeholder for the class that the programmer will extend using the mixin.

(There are other variations on this theme in other mixin-like constructs. Anyone interested in the many meanings of “mixin” could do worse than to start with Richard Gabriel's essay which is based around this subject— though I note that its actual point about incommensurability is deeper, completely distinct, and very interesting!)

Looking at the code, we see there is a cyclical reference chain: from the mixin user, built_from_mixins, to a specialisation of the mixin itself, my_mixin<built_from_mixins>, and back (via Instantiator, which is instantiated to built_from_mixins). These references are a bit strange. We haven't even defined built_from_mixins at the point where we use it to parameterise our mixin instance. Why does the compiler even allow this?

The answer is that of course it's allowed, and the compiler allows it simply by virtue of the usual C++ (and C) rules about incomplete types. Cyclic reference among data type definitions is not unique to mixins. Consider a linked list, where it's no problem to create a recursive “next” pointer inside the list node type. Since pointers-to-incompletes are allowed, and the list node type is just another incomplete type at that location in the code, the code compiles fine.
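
For example, the following is the everyday self-referential case: inside the braces, list_node is still incomplete, yet declaring a pointer to it is fine.

struct list_node
{
	int payload;
	struct list_node *next; /* pointer to a (still) incomplete type: allowed */
};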

It takes only a small generalisation to apply this not just to incomplete pointer target types, but more generally to incomplete types used as template parameter instances. Of course we can refer to built_from_mixins inside its own inheritance clause—but we can only do things that we can do with an incomplete type. Using it as a template parameter is one such thing—so long as the template's definition is consistent with its parameter being incomplete. In particular, possible usages of Instantiator inside my_mixin, above, are limited if we want to use the still-incomplete built_from_mixins as our Instantiator: we can only do the things we can do with any other incomplete types inside a class definition. Happily, my_mixin's definition sticks to this regime, so is well-formed. Moreover, the instantiated my_mixin<built_from_mixins> is itself a complete data type! (Similarly, if you defined a mutually recursive pair of data types using pointers, whichever one of them you defined first in your source file would be complete immediately, even though it contains a pointer to the yet-to-be-defined second data type.) Being complete, our instantiation of my_mixin is fair game for deriving from. This is what allows us to derive built_from_mixins from my_mixin<built_from_mixins>: the latter is complete, even though its type parameter built_from_mixins (known as Instantiator inside the mixin) isn't.

In fact, we haven't used Instantiator at all inside my_mixin. So, why include it? What can we do with it? Well, we can safely use it in any way we can use any other incomplete type: as a pointer (or reference) target type, or as a type parameter. An example is boost's enable_shared_from_this, a mixin which adds the necessary state and member functions for allowing a class to provide a shared_ptr version of its this pointer. You can't safely create a shared_ptr from a regular pointer in general because you don't know where the target object's reference count lives. The enable_shared_from_this mixin fixes this by embedding a pointer to the refcount, in the guise of a weak_ptr subobject, into the mixing-in class. The guts of enable_shared_from_this are basically as follows.

template<class T> class enable_shared_from_this
{
private:
    mutable weak_ptr<T> weak_this_;
public:
    shared_ptr<T> shared_from_this()
    {
        shared_ptr<T> p( weak_this_ );
        return p;
    }
};

Just as in our first example, we have some private state and a public interface which implement an orthogonal feature that can be “mixed in” to any class. The mixin-instantiating class T is referenced only in an incompleteness-tolerant way, to instantiate other templates and (eventually, inside the definition of weak_ptr, which is not shown) to define a pointer target type.
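
Mixing it in looks just like our first example. A minimal use might look like the following (the session class is invented for illustration); note that shared_from_this() is only safe to call once the object is already owned by some shared_ptr.

#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>

class session
 : public boost::enable_shared_from_this<session> /* CRTP-style "mixing in" */
{
public:
	boost::shared_ptr<session> self() { return shared_from_this(); }
};

int main()
{
	boost::shared_ptr<session> p = boost::make_shared<session>();
	boost::shared_ptr<session> q = p->self(); /* shares ownership with p */
	return 0;
}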

I've also seen CRTP described as “virtual functions without polymorphic behaviour”. What does that mean? Since our mixin has a reference to its instantiating class, it can call its methods—even though the binding to that specific class has not yet been formed. In other words, we have deferred the binding of methods—but not until run time. Rather, we have deferred them to later in compile time, when our templates are elaborated. Let's look at an example.

Let's try to get this deferred binding without using CRTP, but also without using virtual functions. Unsurprisingly, it doesn't work. The best we can do is to try non-polymorphic inheritance, by writing something like this.

class X
{
public:
	void f() { /* unimplemented */ }
	void g() { f(); }
};

class Y : public X
{
public:
	void f() { cout << "Done something"; }
};

Of course, if we call g() on a Y object, it calls X::f() and not Y::f(). Using CRTP, we can get it to call Y::f() without resorting to virtual functions.

template <class Derived>
class X
{
public:
	void f() { /* unimplemented */ }
	void g() { static_cast<Derived*>(this)->f(); } /* call the derived class's f() */
};
class Y : public X<Y>
{
public:
	void f() { cout << "Done something"; }
};

CRTP allows the base class to leave certain functions undefined, for definition later in many possible derived classes. The derived classes are not derived the usual way, though: they are derived using CRTP, passing the derived class back to the base class as a type parameter.

This sets up a now-familiar kind of cyclic reference: the base class refers to its (potentially many) derived classes, through its template parameter. Having this “forward” reference, down the inheritance hierarchy, as well as the usual backward reference up it, is what allows derivation-time binding. It's also limiting: we can't subsequently “re-override” Y::f(). Y's method bindings are fixed at derivation time. We have to create a new specialization of X and derive immediately from that, using some other means to get at Y's functionality if we need it.

Interestingly, note that it's okay for us to write static_cast<Derived*>(this) and call f() through it inside our X class template. This might seem surprising, because at the point where X<Y> appears in Y's base-clause, Y is an incomplete type: we're in the middle of elaborating its base class. I mentioned earlier that we are restricted in what we can do with an incomplete template parameter, yet here we are happily downcasting to it and calling one of its member functions. The wrinkle that saves us is that the body of a class template's member function is not instantiated along with the class; it is instantiated only when it is used. By the time anybody actually calls g() on a Y object, Y is a complete type, so the downcast is well-defined and the call binds to Y::f(). (The standard's rules in this area, notably the infamous “two-phase name lookup” and the rules about points of instantiation, are notoriously subtle, so I won't pretend to have internalised every clause; if you spot an error in the above, I'd be much obliged to hear about it.)

[/devel] permanent link

Thu, 10 Jan 2013

Systems versus languages

Somewhere buried within one recent magnum opus in these pages I highlighted a contrast between systems and languages. I also noted that OOPSLA conspicuously contained the word “systems” in its title. This juxtaposition probably seems incongruous to some, but it is one close to my heart. I see myself as a researcher tackling problems usually labelled as “programming languages”, but with a “systems” mindset.

Richard Gabriel's very interesting piece about incommensurability in scientific (and engineering) research, using mixins as an example, makes some remarkably similar observations. (There's also a video of a presentation from last year's ClojureWest, but I haven't yet watched it.) He distinguishes the “engineering” papers that first discussed mixins from the “scientific” papers that subsequently studied and formalised them. And, without disparaging the latter, he very much emphasises the underrated value of the former. (He also makes some very incisive observations about apparent misunderstandings and omissions in the influential Bracha & Cook paper. A major point, of course, is that they need not be misunderstandings per se—incommensurability is the posited explanation.)

It's nice to get a historical perspective on these matters, from someone like Richard Gabriel who has seen a lot of ideas come and go. In part, his appraisal of the changing role of engineering papers offers me some reassurance that I might not be crazy. In conversation with other researchers, it can be frustrating that declaring my interest in programming languages is taken so often to imply that I do theoretical work. But I am more interested in languages as systems—my interest has to do with their “useful-for” properties, not “abstractly-is” properties. (Of course, I'm glad people are working on the latter, and I try to relate their findings to what I do... while accepting that it's not what I do.) Another interesting historical tidbit is that Gabriel estimates that my kind of work—engineering approaches to programming problems, you could say—was “outlawed” from (yikes!) roughly 1990 to 2005. I suppose it's handy that I started my PhD in 2006. The feeling of “separate camps” is still very much there, of course.

[/research] permanent link

Thu, 06 Dec 2012

Bridge that gap

I don't do IDEs.

(This post isn't really about IDEs, but humour me.)

I used to do IDEs, back in my Turbo Pascal days. And I want to now: I value the things they promise. But the reality, for me, always seems infuriating and limited. They get in my way more than they help me.

One example: Eclipse. I used Eclipse for Java programming back when I was an undergraduate, and mostly bent it to my will. Since then I've done fairly little Java programming, and a combination of mind-rot (mine) and evolution (Eclipse's) has left me sufficiently unfamiliar with recent versions that I am unable to use them effectively.

I just tried again. I wanted to open a third-party Java codebase's Eclipse project that I had downloaded from that codebase's web page. I am in the Java perspective, and I click the “Project” menu, hoping it will let me “Open project”. But no! “Open project” is greyed out. Woe is me.

Greying out UI elements is a terrible, terrible practice that should never have been invented, because it doesn't tell you why something was greyed out, so leaves you clueless about how to ungrey it. But no matter. Being a researcher, two ideas occur to me. Both are ways in which we could, for “little cost”, add back the ability for a program to explain why a widget is greyed.

Idea one: we should really be using some modern programming technology under which the “greyedness” state is described declaratively. In that case, the program's code will contain a concise description of exactly what causes the menu item to be greyed. I could check out the code to understand why it was greyed. Or perhaps this condition could be reflectively exported to the user. Either way, if only we had written our UI in some declarative style! Then our problem would be easily solved. But, alas, we haven't written it like that. Instead, greyedness emerges from some maze of twisty little imperative procedures.

Idea two, luckily, is more immediately applicable. Let's “shadow” each UI element with some extra information to do with its greyedness or otherwise. When we set its state to greyed, we snapshot some property of the program state, like the address of the caller who is turning on greying, or the whole callchain, or whatever. Then I can somehow query this shadow—maybe by attaching a debugger to Eclipse, or whatever—and have it tell me why it was greyed.

A thought occurs. Is this a general-purpose approach to “programs that explain themselves”? (Hat-tip: “Functional programs that explain their work”, by Perera, Acar, Cheney and Levy, seeded that phrase in my mind, although the general idea has been with me for much longer.) Interactively querying for explanations of program state or program output seems like a general, powerful tool, both for programmers programming and users UI-using.

Aha! you might say: this could work for a Dr Expert Developer, but there's a problem for poor Joe User. The “explanations” will be in terms of program objects, not “something the user understands”. Therefore, you would argue, my approach is not very useful except to niche-case users like me. But to this, my rejoinder is: why shouldn't they understand? If an object-oriented program is properly abstracted, it will have a “surface” level of objects which model the domain as the user sees it. Dialogs and buttons and widgets and text entry areas are all just objects, and users understand these more-or-less fine (accepting caveats from the HCI audience).

It seems to me that paving this continuum, between user-facing and program-level abstractions, is one of the great promises of object-oriented programming. I wouldn't say it's fulfilled, but then, I wouldn't say we program in a terribly object-oriented way most of the time. When I was quite a bit younger, I found it odd that Alan Kay would simultaneously have been working on user interfaces and on object-oriented programming. More recently I think I have begun to grok this connection. My latest Eclipse problem is yet more evidence of why it's a useful one.

This connection is also one which functional programmers of the Lisp-y, Scheme-y schools understand. The abstractive power of these languages is supposed to be used—I've heard it said, at least—to construct domain-specific abstractions which naturally model the elements of the program's work (its “objects”, you could say, except that here we won't). In this way, the program itself is lifted up to a level of abstraction which the user, by definition, would understand. (Emacs users might be able to tell me how well this can work out... anyone? Oh, all right.) I lean more towards the object abstraction than the lambda, but perhaps it's six versus half-a-dozen.

Perhaps disappointingly, that's mostly the end of this post. But I can't resist appending a rant about my own work. What I'm convinced of is that the object abstraction is everywhere... it's just latent a lot of the time. It's in C programs, functional programs, windowing systems, filesystems, OS kernels, spreadsheets, web browsers, IDEs. It's latent in any chunk of state at all. Yet the promise of seamlessly bridging that gap—of constructing a continuum between the programmatic and the user-facing—is not yet with us. That's because there are hundreds of different ways in which this state has been constructed and encoded, and no infrastructure unifies them. Classically, we can only shrug and say that people weren't enlightened enough to write their code the “right way”, using the right languages, targeting the right runtime infrastructure, and so on. But more likely, those “right” things were never fully realised, and never will be. Either way, what we need is a postmodern object-oriented runtime: one that can find the object abstraction, and present it to us where it exists, even if that presentation is something of an illusion—an adaptation, you could say—of reality. (This concept is also similar to views in a relational database.)

What would such a runtime look like? What's a precise definition of the problem it's solving, even? Both of these are hard questions to which I have no complete answer. But it's no coincidence my PhD was about adaptation (although I'm not saying you should read it). And my more recent side project on DwarfPython (that I would love to pursue, if only I could find a way of getting paid for doing so) is also tackling a subset of the same problem space. Instead of a language implementation owning its object representation, can we build one which gives up that ownership, but rather takes on the job of “seeing” arbitrary chunks of state as objects it can manipulate? The idea of DwarfPython is to use object producers' debugging information to do just that. More generally, in-memory state is not the only kind of object; I also have an interest in unifying files with objects. Again, rather than the classical approach of persistent programming languages, we can take a more postmodern approach in which each file has the freedom to represent its contents, or “object state”, in a different way, or a variety of ways, subject to appropriate interpretation. This is thematically similar to the work of my Bachelor's dissertation; although that pursued a far too classical approach, it was still trying to unify files with objects. So, finding the object abstraction in unusual places seems to have been a theme of my work from day −1, even if I didn't realise it at the time....

[/research] permanent link

Mon, 03 Dec 2012

Tools or not tools

Jim Coplien's keynote at SPLASH this year was a peculiar one. It featured two particularly provocative sentiments: firstly that too much abstraction is a bad thing, and secondly that building tools is not what we want to be doing. (The latter was actually due to Dave Ungar, during questions, but met with vociferous agreement from the speaker.)

These remarks met with some puzzlement from much of the audience, judging by a series of apparently disparaging remarks during subsequent conference talks and question sessions. Who could disapprove of abstraction or tools? I think there are some reasonable answers to that question; what follows is my attempt. (I have no idea whether it coincides with Coplien's or Ungar's.)

Abstraction all the way up

The abstraction issue is perhaps not terribly controversial. Most programmers are aware that abstractions present a trade-off. The temptation to abstract endlessly can be a rat-hole that distracts from actual progress on the task at hand. Ian Lance Taylor once blogged a fairly similar opinion. If you abstract too far, you abstract away essential features of the domain, rendering it unrecognisable. This yields “abstractions” that are actually complex, not simple, to use. Abstracting over multiple use cases, i.e. generality, is a common offender here. For example, rolling your own implementation of a graph algorithm can be easier than figuring out how to tame the monstrous generality of something like the Boost graph library. (Pardon my C++; no doubt you can think of your own examples.)

Sometimes, abstractions exploit specificity, by packaging up common-case usage patterns. This can be very useful. In fact, an interesting counterpoint to the Taylor piece above was Rustan Leino's note about loop structures in a subsequent SPLASH keynote: inferring loop invariants is one of the hard problems faced by any verifier. By constraining the form of a loop, it becomes easier to find its invariant. Abstract loops are an extreme case of this, since the loop itself is in library code and not in user code, so the invariant need be found only once. But of course, just as Taylor hinted at, any user forcing themselves only to use such loops will end up spending rather a lot of time structuring their code to accommodate this constraint. (In this way, it shares a lot with other syntax-directed reasoning tools, including type checkers. These tools are superficially easy to market—hey, look, it shows you bugs in your code. But there is a hidden cost to using them, deriving from implicit constraints on how you can structure your code such that it interacts nicely with the tool. If you don't stick to these, your tool fails in some way, like false-positive type errors or solver timeouts.)

To end my rants about abstraction on a complaint, I could also roll out one of my previously-blogged complaints about common styles of functional programming—with liberal use of polymorphism, or unification of higher-order with “ordinary” operations (think currying, or juxtaposition-is-application), code can become needlessly hard to read. Lazy languages add the unification of storage with computation, which I concede is sometimes an incredibly useful abstraction, but easily makes the memory behaviour of your program incredibly difficult to understand.

What about tools?

For me, the most interesting issue concerns tools. Dave Ungar phrased it something like this: “if every bolt under my car had a little handle on it, I wouldn't need to get out to go and get a wrench”. So, let me frame the contrast I believe he was making as one of tools versus run-time systems. Dynamic O-O environments are very much systems, geared around the ability to push new capabilities down into the system's fabric, rather than having them sit on top. This “fabric” is what emerges from combining the messaging metaphor (messages are fundamentally proxyable) with dynamic extensibility (adding new messaging behaviour is a local change during runtime, not a far-reaching change at compile time). As I have rambled about previously, the lower some functionality is integrated into a system, the more pervasively available it is, so the more power and leverage it confers. Smalltalkers and other dynamic language advocates know this. It's a very tricky thing to convey to the unfamiliar. It's even harder to measure. Most of us don't use runtimes that have this amount of dynamism and immediacy, although Javascript may yet change that. Operating systems, not least Unix, are also dynamic runtimes in this way, although their inability to see inside applications means (unfortunately, and avoidably) that a large amount of useful code and data (hence potential “extension”) is opaque to them.

Tools are fragmentary; runtimes are integrating. One reason people develop tools and not runtime extensions is that integration is hard. If you write a command-line tool, you get to define its input domain, output domain and behaviour from a clean slate, according to your convenience. This is often (though not always) easier than plumbing something into a runtime, which is a bunch of code somebody else wrote. But let's imagine making the leap. To get slightly more concrete, suppose the “tool” we're interested in is a dynamic analysis tool—pick your favourite bug-finding, race detection, memory leak detection or other tool that fits the general mould. What's better about having it as a “runtime” rather than just a “tool”? Well, its functionality would be embedded right there in your running program. As a consequence, it supports exploratory, interactive, programmatic use. If you dropped to a REPL in your program, the innards of the tool would be laid out across your program state, pushed into fields on program objects. If your tool is a race detector using a lock-set algorithm, for example, then each object's lock-set would be accessible as a field on that object. If you're using timestamps or vector clocks, they would be there too. You're also not stuck with a fixed amount of insight the tool's authors saw fit to provide (e.g. when trying to track down the source of a data race); the tool's code is a service you're free to extend. Getting interactive, exploratory, programmatic usage seems like a useful payoff for the effort of integrating your tool into a runtime. Arguably, then, the challenge is building runtime infrastructures that are not unduly difficult to extend like this.

Progress? Is the “tools not runtimes” tendency getting stronger? “Systems, languages, applications” is the conference title's invariant. “Tools” is nowhere to be found. My vague impression is that today's programming research is more tool-focused, and less system-focused, than 20–30 years ago. (A near-dual property is also true: self-proclaimed “systems” research has less programming focus than it used to. I used to bemoan this frequently while doing my PhD in the systems research group in Cambridge.) But why? Simplistically, we might just say that integration is hard. I think there is something more subtle at work. Integration of research techniques into runtimes arguably scales poorly—since we all have to integrate into the same runtime, we have to achieve consensus on that runtime's interfaces. Tools, being freestanding and piecemeal, arguably scale better. You could say that lots of small, freestanding tools are the postmodern way, whereas “one true runtime system” is a classical ideal. (It's fitting that Noble and Biddle's “Notes on Postmodern Programming” was recognised at SPLASH this year for its influence, in the Onward! strand.)

Avoiding classical fallacy. In the battle of the classical versus the postmodern, normally I side with the postmodern. How can we square this with the desire for the benefits of the runtime approach as I described it? I should probably save my thoughts for another post. But two ideas come to mind. The first is one I've already mentioned: design a runtime infrastructure that is specifically easy to extend. But that seems to be begging the question: if we knew how to build this magical runtime, and what kinds of extension it would need to support, we'd already have done it and solved the problem ages ago. For this reason, we also need the second idea: we need to get into the mindset of tolerating a lot more heterogeneity. Very briefly, it means pushing radically downwards our notion of “runtime” so that most of the typical implementation decisions of an object-oriented runtime, such as dispatch mechanisms, introspection mechanisms and object layout, are actually user-level decisions in such a system, but still recognisable as the abstractions they represent. In other words, our system can descriptively extract latent object abstractions from the contexts in which they emerge in existing systems, given descriptions of these latent abstractions. This contrasts with traditional runtimes, in which the object abstraction is constructed by the runtime implementor in a way that is prescriptive. And hey presto, we are back to my VMIL 2011 workshop paper: we already have a very powerful descriptive technology, in the form of debugging infrastructure for native code; our task is to bend it to this new purpose. So, end of rant for today.

[/research] permanent link

Wed, 27 Jun 2012

32 bits should be enough for anyone

For a brief while, 32-bit Intel machines were the de facto standard in commodity hardware, and life was simple. Certainly, it's an ugly architecture, gigantically overcomplicated by backwards-compatibility. Its virtual addressing features are terrifying. But the subset of it which user-level programmers on modern OSes use is fairly comprehensible. There is an ISA-defined stack with its own registers and a well-defined calling convention. Pointers and most integers are both 32 bits, meaning that the “word” is a useful and well-understood unit of storage.

All this changed in the early 2000s as AMD's 64-bit extension of the ISA gained popularity. Upon us were forced bigger integers, bigger pointers, and a more complicated stack and calling convention (in the name of “performance”, but at huge cost in complexity). I believe these were completely retrograde steps. Now that pointers are 64 bits, our software's memory footprint and disk footprint are bloated considerably. To “alleviate” this, and to avoid certain paradoxes about the size relationships between short, int and long, an int in most C compilers stayed at 32 bits. Unfortunately, this is completely braindead, because int is very much supposed to be a word-sized type. This is the reason that C's “defaults to int” semantics, as applied to unprototyped functions and untyped variables, are sane.
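
Concretely, on an LP64 platform such as x86-64 GNU/Linux, a quick check shows where the widths ended up: long and pointers grew to 64 bits, while int stayed behind at 32.

#include <stdio.h>

int main(void)
{
	printf("short %zu, int %zu, long %zu, long long %zu, void* %zu\n",
	       sizeof(short), sizeof(int), sizeof(long),
	       sizeof(long long), sizeof(void *));
	/* on x86-64 GNU/Linux: short 2, int 4, long 8, long long 8, void* 8 */
	return 0;
}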

Does this matter? Yes! Here is some code that was mysteriously segfaulting for me this morning. It's from DTrace, or more specifically, Paul Fox's Linux port of it.

if ((P->rap = rd_new(P, P->pid)) != NULL)
  (void) rd_loadobj_iter(P->rap, map_iter, P);

Somehow, the pointer returned by rd_new() (which just wraps a simple calloc() call) gets corrupted immediately after it is returned. Suspiciously, said corruption is that the top four bytes are 0xffffffff, whereas the lower four bytes are those of the pointer returned by calloc(). Inspecting the disassembly around the call to rd_new(), we see something suspicious.

   0x000000000047ed12 <+150>:   callq  0x462bc6 <rd_new>
=> 0x000000000047ed17 <+155>:   cltq   

What's this cltq thing? Well, it takes the lower 32 bits of %rax (the 64-bit register holding the return value from rd_new()) and sign-extends them to fill the full 64 bits. This is exactly the corruption I was seeing. Why did the compiler insert this unwanted instruction? The answer is revealed if we recompile the file with -Wall.

Psymtab.c:645:3: warning: implicit declaration of function `rd_new' [-Wimplicit-function-declaration]

The function is not prototyped, so its return value defaults to int. But because int is now 32 bits wide, and the register holding the return value is 64 bits wide, the compiler helpfully obliterates the top 32 bits of the return value for us by sign-extending the lower 32 bits into them. If the compiler implementors had stuck with the intention of the int data type, that it be exactly a word in size, and therefore that “defaults to int” is sensible, this would not have arisen.
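For the record, here's a minimal sketch of the same class of bug, using hypothetical file and function names (so, nothing to do with the DTrace code). On an LP64 system, an older compiler accepts it with only a warning, and the pointer prints mangled in the same way whenever the mapping lands above the low 4GB---which, for mmap() on x86-64 Linux, it usually does.

/* alloc.c (hypothetical) */
#include <sys/mman.h>

void *make_buffer(void)
{
	return mmap(0, 4096, PROT_READ|PROT_WRITE,
	            MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
}

/* main.c (hypothetical): no prototype for make_buffer(), so its return value
   “defaults to int” and the caller sign-extends the low 32 bits of %rax */
#include <stdio.h>

int main(void)
{
	void *p = (void *) make_buffer();
	printf("%p\n", p);	/* prints a corrupted, 0xffffffff...-prefixed pointer */
	return 0;
}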

Now, this is clearly sloppy code. We should just fix it so that rd_new() is prototyped. It probably seems a bit of a non sequitur that I am blaming this problem on 64-bit architectures. But on the other hand, how often in your code have you actually wanted integers that can store values in excess of 2^32? If you are a systems programmer, you might value the ability to encode large offsets. But we already had long long for this. In other cases, the vast bulk of our code deals with small integers, characters and pointers. Giving us an extra 32 bits of width in our ALU operations is a waste of transistors.

Why did we waste them this way? Well, we had to waste them somehow. In the early 2000s, we didn't really know what else to do with them, because (I suspect) there was little perceived demand for multiple cores in the commodity market (outside of servers). Nowadays, we have even more transistors, and even hardware guys realise that giving us 128-bit architectures would be a pointless waste. So, they spent some effort convincing us that we really did want multiple cores after all. And now we are busy complicating our software so that we can “exploit” these too. I have ranted before about how that risks generating a generation's worth of bad software. Perhaps I should say “another generation's worth”.

By the way, I'll concede that 64-bit address spaces can be useful, if they are used to support persistence or sharing. No need for pointer swizzling! But AMD's 64-bit x86 extensions do not provide the separation of protection from mapping to realise the sharing use-case. In other words, switching protection domains still means invalidating the TLB entries of shared mappings. Meanwhile, I haven't seen anyone using the extra address space for accessing persistent storage in a radically new way, although I'd love to see some approaches in this space.

I don't completely doubt the value of multiple cores either. The right way to see parallelism is as an enabler for radically more computation-intensive applications---likely to be in domains such as scientific computation, machine learning, and simulation---than what we can currently support. As I have also ranted about before, I am deeply disturbed by the fervour for mass rewriting of everyday software, and the disruption to the infrastructure it runs on, that is resulting from mindless multicore mania, in the same way that the 64-bit shift has disrupted our infrastructure. It's all in the name of performance, but it costs us far more of human beings' time and energy than it saves.

[/devel] permanent link

Fri, 01 Jun 2012

Metacircularity (or: “what's Java got to do with it?”)

Before I begin, here's an executive summary. Metacircularity is not about self-interpretation at all. Rather, it's an engineering approach to re-using as much code as possible between different parts of a toolchain (including compiler, runtime and debugger). This is noble, but limiting ourselves to working in a single language is needlessly restrictive. If we get over our presumptions about “language barriers” (cf. Oracle's disappointing attempt at explaining metacircularity), we can apply the same re-use philosophy to supporting a diversity of languages, not just one.

I've recently found myself wanting to understand the concept and purpose of metacircularity in language implementations. This is because I've become interested in understanding the Maxine JVM, which is often described as having a metacircular design.

All this meant to me at the time was that it's written in Java---making it a self-interpreter, certainly. But is it meta-circular? What does that mean? Why might it be a good thing? As I will show, if we're even just a tiny bit pedantic (which I hope we are), then metacircularity is not really a special case of self-interpretation, but a completely separate concept.

I've found two very helpful papers describing metacircularity. The first is that of Chiba, Kiczales and Lamping talking about the “meta-helix”. The other is Ungar, Spitz and Ausch describing the Klein VM, a metacircular implementation of Self. The first paper really emphasises the idea of implementation by extension which is at the heart of metacircularity. They note that use of metacircularity [is] “to allow extensions to be implemented in terms of the original non-extended functionality”. As the paper goes on to discuss, there is a tricky bootstrapping problem inherent in this. If we don't keep careful track of the dependencies between all these extensions, subtle and not-so-subtle bugs, most obviously infinite recursions, can arise. The paper is about avoiding confusion of metalevels, and as they propose, the shape of a helix, not a circle, makes much more sense in describing what supposedly meta-circular systems are actually doing.

The second paper, by Ungar et al, is more of a practitioners' view: it shows what VM builders consider to be a metacircular design, and what they hope to achieve by it. After reading these two papers, reading some other things, and scratching my head a lot, it became apparent that the primary goal of metacircularity is to solve two very practical engineering problems concerning re-use: re-use of code and re-use of debugging tools. They mention the code re-use issue directly, by saying that in traditional designs “an operation such as array access must be implemented once for the runtime, once for the simple compiler, and once for the optimizing compiler”. The question of tool support is also an explicit motivation in the same work: they lament that in the non-metacircular Self VM, “to inspect a Self object... [in] an application that has crashed the VM, one must invoke a print routine in the VM being debugged, [an approach of] dubious integrity”, and that the VM “must be able to parse both Self and C++ stack frames”.

So, perhaps prematurely, I'd like to propose a new characterisation of metacircular VMs that I think captures their true nature. Metacircular VMs are interpreters (in the general sense) that are carefully constructed to have a dense re-use structure. They do this by expressing as much as possible---front-end, compiler, runtime---as extensions over a small common core. This core has some interesting properties: it is designed explicitly for extension, and includes the necessary foundations of debugger support. It is the former which allows code re-use, and the latter which enables the same tools to see all the way up the stack, from various levels of VM code up to application-level code.

Note that this is fundamentally different from a self-hosted compiler. Under self-hosting, the compiling compiler is not re-used at all in the compiled compiler. It is just used to implement a translation function, from end to end; how it does it is completely opaque. By contrast, in a metacircular VM, you can invoke your hosting runtime to perform part of your work---asking it to “do what you would do in case X” by calling the corresponding function (or sending the corresponding message, for Smalltalkers). The trick is to ensure that these helper requests are correct and well-defined, meaning they do not cause infinite regress (the obvious bug) and do not confuse meta-levels (the more subtle bugs mentioned by Chiba et al).

As a consequence of this fundamental difference, the key challenge of metacircularity is not just that of “implementing language X in language X”; it's dealing with the bootstrapping problem. What is a suitable common core? How can we make it small? What extensions must it permit, or may it permit? How can we structure the extensions on top of one another, so that they can express what we want, re-using what we want to re-use, and efficiently?

So, we've established that “self-interpretation” is an irrelevance. But it seems that most metacircular designs are, in fact, self-interpreters, right? I actually consider this to be false. Even when staying within “one language”, the fundamentals of the bootstrapping process mean that at a given level in the interpreter, certain language features may only be used in restricted ways. Sometimes these restricted subsets are given a name, like “RPython” in the PyPy project. In other cases, they are not named. But in all cases, there are restrictions on what functionality, at some level in the system, it is safe and meaningful to invoke at the metalevel. Indeed, this is exactly the “helix” shape that Chiba et al were describing. In other words, different parts of the interpreter are written in different sub-languages, precisely in order to avoid infinite regress. Just because there is continuity between the core language and the eventual top-level language doesn't make them “the same”, and for this reason, metacircular VM designs are not self-interpreters.

If I were to write a ranty summary of the above paragraphs, it would be that the apparently “beautiful”, head-twisting, recursive, quasi-mathematical aspects of the metacircular design---the things which language nerds get excited about---are both irrelevant and illusory. Metacircularity is motivated by engineering pragmatics, not “deep” linguistic or mathematical concepts. (Homoiconicity, itself a concept of overrated interest, is an orthogonal concept to metacircularity, despite what at least one blogger has written.) I believe this fixation with superficial observations about language stems from the documented inability of many programmers to divorce concepts from language. (For “documented”, I can say at least that Lamport has commented on the problem, and in this case I agree with him. I have big disagreements with other parts of the same article though. I will post about those in the near future.)

So, having stated that re-use is the good thing about metacircularity, why is re-use so much easier in a metacircular design? The reason is that we have a common core providing a coherent and adequate set of services---the services embodied in the “bootstrap image”. And I say “services” and not “language” for a reason. The core really is a set of runtime services. As I have explained, it is only a distant relation of whatever high-level language the author is intending to realise. In our current technology, re-using code and re-using tools across languages is hard, and so “build everything in the same language!” seems like a useful answer to a VM author's problems of API-level interoperation and of tool support. Metacircular designs are the result (because it's the closest you can get to doing everything in one language). But as I've just described, the “same language” property is an illusion, and there are inevitably many languages involved. It just happens that in current projects, those languages are designed to be as similar as possible to one another---featurewise increments, in effect. But instead of this unimaginative perspective, anyone building a metacircular VM should ask themselves: how can I design my core services---the core of the VM---to support as many different languages as possible?

This will sound familiar to anyone (so, hmm, maybe ten people on the planet) who has read my “Virtual Machines Should Be Invisible” paper. Although it doesn't approach the problem from a metacircularity perspective, this paper is all about building an infrastructure that can support a diverse variety of languages, sharing code and tools between all of them.

Currently, our shared base infrastructure is a POSIX-like operating system. Every VM author (even those interested in Windows, which I'm cool with) implicitly targets this infrastructure. Unfortunately, these systems don't provide enough abstractions. As such, different language implementors build their own infrastructure which reinvents similar abstractions in incompatible ways---including functions, objects, garbage collected storage, run-time self description, exceptions, closures, continuations, and so on. We can clearly avoid this pointless diversity without sacrificing innovation. Just as with operating system interfaces, there is never complete quiescence or consensus, but we still manage to share a lot more software between OSes than we did in the pre-Unix or pre-POSIX days.

One of the mitigating techniques which my VMIL paper describes but which metacircular designs don't use is: describe your implementation decisions. Don't encapsulate them! If you implement a certain language feature a certain way, describe it. There is nothing fragile about this, because your descriptions will be written in a standard way and consumed by an automated interpreter---called a debugger. This is what native debugging infrastructure does. VM-hosted debuggers, of the Java or Smalltalk flavours, don't do this. To make the value of this approach clear, let me finish with another example from the Ungar paper, where they proudly state that Klein VMs can be debugged remotely, and in a post-mortem fashion, using another Klein or Self VM. “A separate, possibly remote, Self VM hosts an environment that manifests the innards of the Klein VM at the source level. Thanks to Klein's metacircularity and Self's mirror-based reflection model, Klein can reuse a vast amount of already-written Self programming environment code.”

What the authors are forgetting here is that this is not a new facility. Native debuggers have long had the capacity to inspect remote processes. Smalltalk-, Self-, and Java-like designs took a retrograde step by forcing debugging to exploit the help of a server within the VM. Although this has the benefit of allowing the debugger implementation to share the introspection services already present inside the VM, it requires a core of the VM to remain working correctly, even after a failure, which precludes many cases of post-mortem debugging. By contrast, trusty (or crusty? your choice) old native debugging is necessarily designed for this as a common use-case.
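Concretely, the facility I mean is the everyday native workflow, which needs no cooperation at all from the debuggee---something like the following (the path and pid are placeholders).

$ gdb /path/to/prog /path/to/core    # post-mortem debugging from a core dump
$ gdb -p 12345                       # or attach to a live process by pid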

The approach I advance instead, as described in the VMIL paper, is to implement introspection on the same infrastructure that supports remote inspection---which happens to be the DWARF infrastructure, in the case of DwarfPython and modern Unix-based compiler toolchains. This is very similar to the Klein approach, in which mirror objects may reflect both local and remote state. But it completely avoids the myth that we should implement everything in a single language. Indeed, debugging information formats like DWARF are actively concerned with supporting a wide variety of languages. One Klein process can inspect another because they share a set of implementation decisions. By contrast, a native debugger need share nothing with its debuggee, because native debugging infrastructure includes a facility which is fundamentally omitted from VM-hosted debuggers: the language implementation explicitly describes its own implementation decisions. It does this down to the machine level, and moreover, up from the machine level.

The result is that given a memory image of a crashed program, we can recover a source-level view of its state at the time of a crash. VM-hosted debuggers are fine for user code because encapsulation and memory-safety protect enough of the VM implementation that the debug server can still work. (Notice I don't say “type-safety”! Type-safety is just an enforcement mechanism for encapsulation, not the key property that ensures encapsulated state is not corrupted.) These VM-level guarantees do not have such a nice property if the failure was due to a bug in the VM itself. This is because the invariants of the VM's own data structures are by definition broken in this case. Some might argue that this is a minority use case, so VM-hosted debugging is fine for general use. Personally I don't mind, as long as I have a debugger that can see all the way down. Currently this doesn't include any VM-hosted debugger, but perhaps it could do. (One of my perhaps-future small projects is to create an implementation of JDWP that knows how to answer queries about native code.)

In summary, I think of the solution to re-use proposed by metacircular designs as a degenerate case of the approach I am pursuing. It sounds strange to most people, but it is not too much to ask for a language-agnostic runtime infrastructure that supports a plurality of language implementations (going right down to native code), direct sharing of code and data, and orthogonality of tool support from language. As I ranted about in the VMIL paper, this infrastructure is a modest and very feasible generalisation of what already exists, with basically only performance questions outstanding. (I'm working on it.) Given this infrastructure, the same careful bootstrapping approach can be used to share code and retain tool support throughout higher-level language implementations. But we can do this without the requirement that everything be in a single language, which doesn't make sense anyway.

[/research] permanent link

Link order

Initialization order of static state is a thorny problem. It's particularly tricky to get right portably. But until recently I didn't realise how tricky it could be even when restricting oneself to GNU tools on Unix platforms. Consider the following three-part program, consisting of an executable prog and two shared libraries lib1 and lib2. The dependency order is left-to-right in that list: prog depends on lib1 which depends on lib2.

/* prog.c */
#include <stdio.h>

/* from lib1 */
void greeting(void);

/* constructor */ 
static void init(void) __attribute__((constructor));
static void init(void)
{
	fprintf(stderr, "Initializing prog\n");
}

int main(void)
{
	greeting();
	return 0;
}

/* end prog.c */

/* lib1.c */
#include <stdio.h>

/* from lib2 */
void hello(void);

/* constructor */ 
static void init(void) __attribute__((constructor));
static void init(void)
{
	fprintf(stderr, "Initializing lib1\n");
}

void greeting(void)
{
	hello();
}

/* end lib1.c */

/* lib2.c */
#include <stdio.h>

/* constructor */ 
static void init(void) __attribute__((constructor));
static void init(void)
{
	fprintf(stderr, "Initializing lib2\n");
}

void hello(void)
{
	printf("Hello, world!\n");
}

/* end lib2.c */

Here's a GNU Makefile to tie it all together.

CFLAGS := -g -fPIC
LDFLAGS := -L$$(pwd) -Wl,-R$$(pwd)
LDLIBS := -l1 -l2

default: lib1.so lib2.so prog

%.so: %.c
	$(CC) $(CFLAGS) -shared -o "$@" "$<"

clean:
	rm -f lib1.so lib2.so prog

Now when you do make (or gmake) it will build a program that initializes its libraries in right-to-left order: from the “most depended on” to the “least depended on”. We can verify this by running the program.

$ ./prog
Initializing lib2
Initializing lib1
Initializing prog
Hello, world!

Moreover, if you try flipping around the link order in the LDLIBS line, the link will fail with undefined reference to `hello', because the reference to hello (introduced by lib1) only appears after the linker has already passed over lib2, and the linker's defined behaviour is not to re-scan earlier libraries for newly introduced undefined references---it's up to the invoker to order the libraries so that this works.
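(To double-check what a successful link actually recorded, the DT_NEEDED entries---whose order follows the command line---can be listed with something like the following.)

$ readelf -d ./prog | grep NEEDED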

Let's try this on a BSD system. I have a NetBSD 5.0 VM hanging around, so I'll try that. It has recent GNU make, GCC and GNU binutils installed.

$ gmake
cc -g -fPIC -shared -o "lib1.so" "lib1.c"
cc -g -fPIC -shared -o "lib2.so" "lib2.c"
cc -g -fPIC  -L$(pwd) -Wl,-R$(pwd)  prog.c  -l1 -l2 -o prog
$ ./prog
Initializing lib1
Initializing lib2
Initializing prog
Hello, world!

Strangely, our initialization order is flipped. This doesn't matter for our program, but if lib1 consumed some static state in lib2, it would matter quite a bit. What happens if we flip the link order around to compensate? We edit the LDLIBS line and re-make.

$ nano Makefile
$ gmake clean && gmake
rm -f lib1.so lib2.so prog
cc -g -fPIC -shared -o "lib1.so" "lib1.c"
cc -g -fPIC -shared -o "lib2.so" "lib2.c"
cc -g -fPIC  -L$(pwd) -Wl,-R$(pwd)  prog.c  -l2 -l1 -o prog
$ ./prog
Initializing lib2
Initializing lib1
Initializing prog
Hello, world!

This has done what we want. But what's going on? This link order didn't even work on GNU/Linux. Not only does it work on BSD, but it's required if we want a sensible initialization order. Our initializers run in left-to-right order, so we need to put the “most depended on” libraries first, not last. This isn't a BSD quirk per se, because we're using the GNU linker in both cases. I suspect the linker scripts are nevertheless different in the two cases. However, I haven't had time to look into the details of why. I'd be interested to hear, if anyone knows. I guess this is the sort of peculiarity that gives libtool a reason to exist.
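(If I do get around to digging, one obvious first step---assuming GNU ld is doing the final link on both systems---would be to dump each toolchain's default linker script and compare them.)

$ ld --verbose > default-ldscript.txt    # run on each system, then diff the two files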

[/devel] permanent link

Tue, 20 Dec 2011

Cathedrals, bazaars and research groups

[Post-hoc clarification: at the time I wrote this rather grumbly post, I was working in the Department of Computer Science at the University of Oxford. It doesn't necessarily reflect any on other institution whose domain you might currently be seeing in your address bar!]

A few months ago I finally got around to watching the video of Guy Steele's “Growing a Language” talk from OOPSLA '98. It's a spectacularly entertaining and insightful talk.

(It's also a nice demo of how a good keynote doesn't have to be Earth-shattering, as long as it's compelling in concept and delivery. Really, the meat of the talk is quite specific: it's about how language evolution should be managed, with particular reference to the then-ongoing attempts to add two features to Java: generic data types, which we all know and love, and operator overloading, which still hasn't made it.)

It was a nice reminder of the two “organisational modes” of collaborative effort that Eric Raymond called The Cathedral and the Bazaar. Building software is one activity where these metaphors apply. Designing languages is another. Research groups are a third.

Like language design and the construction of any large software project (think Linux), research groups aren't a “fully collaborative” activity. Rather, they are “partially collaborative”---it's not that everyone is working with everyone else, but rather, different participants are interested in different pieces of the overall puzzle. There will always be multiple frontiers of progress open concurrently---but all building on a shared central core.

When I was in Cambridge, the group I was in was very much a bazaar in style. There was no unique leader (but rather a gaggle of four or five faculty). Group communications revolved around a mailing list and weekly meetings where discussion was open, informal talks were given, and anyone was free to raise questions big and small.

It wasn't a problem-free group, either in general or for me personally. For my first year in the group, the bazaar was dead. That was a tough time---mainly because communication structures reverted to small cathedrals (and I wasn't really a part of any of them). Even later on, I must admit I didn't always feel completely at home. I was a programmer-oriented researcher in a performance- and applications-oriented group. But in hindsight I appreciate that the group's bazaar-like communication structure and ethos were a very good fit for me, even if the topic selection wasn't great. By the end of my PhD, I found I was getting some reward from my participation in the group, in two ways. For one, my work had gained some degree of recognition in the wider group---I felt I had, in my own small way, “shaped the agenda” at least in a tiny corner. (Sadly this was not enough to get others on board with similar work, but also not miles away from that either.) For another, I had participated in the more topic-independent community aspects of a research group---organising the talks for a while, participating in discussions at talks and on the mailing list, being around, organising events, and so on.

I was recently lamenting to myself---a favourite pastime of mine---how right now, my work isn't a “part of” anything. Nobody cares about what I'm doing, or so it seems, and conversely, I find it hard to get enthused about what those around me seem to be doing. But then again, I have very little idea of what their work is, nor they of mine. There is a lack of transparency and a consequent lack of spontaneity. Without cross-linking communication structures, there just aren't the opportunities to spot synergies and find common interests. I have found this a bewilderingly opaque and unsatisfying environment almost since I arrived, but I only recently realised the reason: that it is a severely cathedral-organised group. There is no institutionalised process for cross-talk (such as frequent group meetings or mailing list), and while there are multiple frontiers open, each is coordinated from the top. This clearly works for a lot of people, but not for me. Does that say anything about the kind of researcher I am, or others are?

As an addendum: it's worth briefly mentioning the “agile research groups” idea, one example of which is Scram of Mike Hicks and Jeff Foster. Eric Eide also mentioned to me he uses some of these ideas, to varying degrees of success, in the Flux group at Utah. Coincidentally, I recently dropped in on both these groups! I think these techniques are mostly orthogonal to the cathedral-versus-bazaar issue: they concern the manner (frequency, duration) of communications, not the topology. I expect Scram works best when participants have a common goal, i.e. there may also be tighter topic-coherence requirements on its suitability. These may perhaps even be more likely to hold in a cathedral-style group, although there is certainly no hard-and-fast causal relationship there.

[/research] permanent link

Wed, 14 Dec 2011

Heterogeneity or homogeneity: what's the problem?

My attention was recently drawn to a problem that some web developers call the “language heterogeneity problem”. I'm not sure where the term comes from; in fact it is not as widely used as I was led to believe. But still, most people who have done web programming know that there are a lot of languages that people use for the web, not usually out of choice per se, and that this is somehow a problem.

The phrase “language heterogeneity problem” immediately jarred with me, since some of my work has been looking at heterogeneity of language as a goal, not a problem. Surely, we want to choose the best language for each part of our program, and not pay any unnecessary cost when doing so? Of course, the problem is about choice versus imposition. It's not that the ability to use multiple languages is bad. It's that in any given context, we don't have that ability! Consequently, you're forced to use particular languages for a given piece of code. This is the true “heterogeneity problem”. I'd couch the problem as lots of small homogeneity problems, not one big heterogeneity problem.

One of my recent student project ideas, sadly not yet attempted (or indeed advertised), is to develop a compiler back-end and run-time library that would let us compile vanilla C programs into web applications. So, for example, if I do printf() it will write some text to the page, and if I do fgets(..., stdin), it will generate a form field whose submission action is to activate the continuation of the program. There are some interesting extensions to this project. How do we partition a program into its client- and server-side halves, or its client-, server- and database-side thirds? Can we tune the partitioning given a set of requirements for security, interaction latency, client code size, and so on?

(There is also an interesting converse to this problem. Most programming languages' standard libraries are designed around a Unix-like model of I/O. And the first programs we teach---like the Hello World program---use this facility explicitly, by printing to streams and reading from them. But we now live in a world where most familiar forms of computing don't have an obvious terminal- or stream-style of I/O evident in their interface. So perhaps clinging to these examples is creating a barrier in front of potential students---who won't relate to the idea of a program doing I/O through a terminal?)

At SPLASH, I discovered that one chunk of my proposed student project effort has been scooped by Emscripten, an LLVM-to-Javascript compiler. However, since writing such a compiler would be too much work for a single project anyway, this might actually be helpful in enabling a single student project to achieve more working stuff. In other words, they could focus on matters other than the compiler, or on doing interesting domain-specific analyses on user code. Alternatively, perhaps a keen student could try to make their own compiler that does a better job than Emscripten, in some way. Hopefully I will manage to advertise the project in time for the next academic year.
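(For reference, and assuming the emcc toolchain is installed, the basic Emscripten workflow is roughly as follows; a student project would sit on top of, or alongside, something like this.)

$ emcc prog.c -o prog.html    # compile C via LLVM to JavaScript, plus a host HTML page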

[/research] permanent link

Tue, 13 Dec 2011

Load addresses

For reasons I will only hint at, I want to predict the load address of a set of shared libraries, given an executable that links against them. Naturally, I have turned off address space layout randomization.

At first, I thought I could use ldd for this. It seems to work.

$ ldd /usr/local/src/git-1.7.5.4/git
        linux-vdso.so.1 =>  (0x00007ffff7fdd000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007ffff7dc3000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ffff7ba5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7806000)
        /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fde000)

But also, there is an environment variable called LD_TRACE_LOADED_OBJECTS that is supposed to have the same effect. As it happens, ldd is just a small wrapper script which sets this variable and invokes the dynamic linker, which on my system is /lib64/ld-linux-x86-64.so.2. Let's try doing this directly.

$ LD_TRACE_LOADED_OBJECTS=1 /usr/local/src/git-1.7.5.4/git
        linux-vdso.so.1 =>  (0x00007ffff7ffb000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007ffff7bc4000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ffff79a7000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7608000)
        /lib64/ld-linux-x86-64.so.2 (0x00007ffff7ddc000)

That seems to work too. But wait! It's given us different load addresses than ldd did. Have I really turned off randomization? Well, yes. In fact, repeating either of these commands will reliably yield the output above, and they are reliably different from one another. What is going on?

Let's hack ldd so that it prints exactly what command it is going to execute.

$ ldd /usr/local/src/git-1.7.5.4/git
About to eval:  LD_TRACE_LOADED_OBJECTS=1 LD_WARN= LD_BIND_NOW= LD_LIBRARY_VERSION= LD_VERBOSE= /lib64/ld-linux-x86-64.so.2 /usr/local/src/git-1.7.5.4/git
verify_out is 
        linux-vdso.so.1 =>  (0x00007ffff7fdd000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007ffff7dc3000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ffff7ba5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7806000)
        /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fde000)

So, it has set a bunch of other environment variables to empty strings. They look innocuous enough. But also, it is invoking the loader directly, whereas we were just letting execve call the loader for us. Can we reproduce the result of ldd by running the same command it does?

$ LD_TRACE_LOADED_OBJECTS=1 LD_WARN= LD_BIND_NOW= LD_LIBRARY_VERSION= LD_VERBOSE= /lib64/ld-linux-x86-64.so.2 /usr/local/src/git-1.7.5.4/git
        linux-vdso.so.1 =>  (0x00007ffff7fdd000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007ffff7dc3000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ffff7ba5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7806000)
        /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fde000)

Yes, we can. Now, the big question: which one is correct? Let's run our program under gdb and inspect the memory map.

$ gdb --args /usr/local/src/git-1.7.5.4/git
... snipped ...
(gdb) break main
Breakpoint 1 at 0x404bf0: file git.c, line 509.
(gdb) run
Starting program: /usr/local/src/git-1.7.5.4/git 
[Thread debugging using libthread_db enabled]

Breakpoint 1, main (argc=1, argv=0x7fffffffddd8) at git.c:509
509     {
(gdb) print getpid()
$1 = 27023
(gdb) shell cat /proc/27023/maps | grep 'lib.*\.so.*'
7ffff7608000-7ffff779d000 r-xp 00000000 08:07 356590                     /lib/x86_64-linux-gnu/libc-2.13.so
... snipped ...

So, libc-2.13.so has been loaded at address 0x7ffff7608000, which is what we got from running with just the LD_TRACE_LOADED_OBJECTS flag set, and not what we got with ldd or with invoking ld-linux.so.2 specially.

Why the difference? Clearly, first execveing the loader perturbs the address assignment. It's not clear why this should be---isn't the loader itself the first thing to be loaded anyway? I'm not yet sure what is going on.

Another question: is predicting the load address even a sound thing to do? Given that we had to disable randomization in the first place, it seems like a bad idea. In my case, this approach will do for now, but ultimately I should defer my work until application start-up time. Then we can discover the actual loaded addresses of the various libraries, which is much more robust.
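Here's a minimal sketch of that start-up-time alternative, using glibc's dl_iterate_phdr---so it's GNU-specific, and only illustrative.

/* list-loaded.c: print the base address of each object loaded into this process */
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int print_obj(struct dl_phdr_info *info, size_t size, void *data)
{
	/* dlpi_addr is the load (base) address; an empty name means the main executable */
	printf("%-40s loaded at %p\n",
	       info->dlpi_name[0] ? info->dlpi_name : "(main executable)",
	       (void *) info->dlpi_addr);
	return 0;	/* zero means "keep iterating" */
}

int main(void)
{
	dl_iterate_phdr(print_obj, NULL);
	return 0;
}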

[/devel] permanent link

Mon, 05 Dec 2011

Refactoring refactoring

A little while ago I embarrassed myself in conversation by blurting out a sceptical opinion of refactoring. In this post I'll explain some opinions on refactoring and related research, hopefully in a more considered and coherent manner than I managed on that occasion.

I admit that my inclination to be a bit negative comes from prejudice, with a couple of origins. One is that a while ago, I had to fend off (rhetorical) claims that refactoring solved what my PhD was doing. It clearly didn't, but then again, it was well worth writing an explanation of why not. (These claims were from my supervisor, not my examiners, thankfully, and I think were being advanced rhetorically.) In that context, interface evolution scenarios were the issue. I still contend that refactoring is not the solution to interface evolution. (Endlessly editing code to stay in sync with “one true” “current” version of an interface, whether with or without the help of refactoring tools, is an unnecessarily burdensome approach; an automated assist for editing code doesn't make that editing work- or risk-free.)

More refactoring

Happily, most uses of refactoring are quite different from interface evolution: they're about the internal structure of code, not “edge” interface details. I'm a big fan of refactoring in these cases. As a practitioner I'd love to have more and better refactoring. In that space, one clear improvement would be refactoring tools for more languages. This doesn't mean starting again; most languages are more alike than they are different. At the moment, the popularity of refactoring serves to cement the Java hegemony. This is my other unreasonable prejudice: I dislike this hegemony, and so refactoring culture is tainted by association (unfairly) in my mind. It'd be really nice to have some decent refactorings available for C++, but I'm not holding my breath. That said, I might not know about them if they do exist.

(Aside: the real problem with C++ is not pointer arithmetic or lack of garbage collection or lack of type safety or anything else that people usually trot out; it's complexity. I'll rant about that in a future post. By contrast, Java does well because it's a simple language. Actually it does unfairly well because researchers abuse its syntax tree as an intermediate representation for program analyses. I might rant about that in yet another post.)

That's my practitioner's view over with. As a researcher, I have one qualm remaining about refactoring. Rather than doing research on making refactoring work in more and more scenarios, I want to see some exploration of a few bigger ideas in the same general space. There are ideas that are more powerful, more “revolutionary” than refactoring but have (so far) less currency.

Language-independent refactoring

Language-independent refactoring is an obvious goal. The means by which to achieve it is less obvious. A shared metamodel seems sensible. The need for a shared metamodel is arguably a limitation. But I don't buy that argument! My reasoning is based on the observation that most languages have large sets of features that are cognate. By this I mean they are not just theoretically equivalent in what they can express (or perhaps not at all equivalent in that way), but rather, a human user understands them in the same way. (I wish we had the empirical foundations to substantiate that, but that's another matter.) So if we can capture these, we can probably come up with a metamodel that works well in practice, even if it fails for adversarially-constructed cases.

Conversely, just to throw another controversial claim into the mix: languages that have such a pared-down set of primitives that they don't offer a cognate of some recurring feature---like purely functional languages without mutable objects, or C without virtual calls---in practice do have cognates, but appearing as patterns of use rather than delineated language features. So I seem to be positing some sort of Chomskyan “universal language feature set” that is in programmers' minds if not explicitly in all languages. That feels a bit strong; I'll have to take some time to see whether I agree with myself there.

(As an aside: of course, many languages have too many features, in that they are mutually cognate: do I use static variables, or a singleton object, e.g. in Java? Do I use typeclasses or traits, e.g. in Scala? Do I specialise using template specialisation, overloading or overriding in C++? These self-cognate features usually have arbitrary limitations, and their diversity exists for implementation reasons. Exposing a choice among them leaks implementation details to the programmer, along with consequent performance characteristics. So, forcing the programmer to select among these early on, as these languages all do, is an evil akin to the evil of premature optimisation. Fortunately, it's an evil that refactoring exists to fight!)

(Continuing the aside: we perceive “clean” languages to have few mutually cognate features. Conversely, most mutually-cognate features differ along some potentially-separable dimensions: each feature mixes a particular setting on each dimension. For the “static versus singleton”, having a chunk of data that is “one per program instance” is the main concern, and dynamic-versus-static allocation is the orthogonal issue that is unhelpfully mixed with it. In a Java-like implementation, object identity is another mixed concern: it's something you get in the singleton case, not the static field case, and effectively for reasons of implementation leakage. Conversely, in C and C++, statically-allocated stuff can still have its address taken, so there is better separation of concerns in that case.)

Non-behaviour-preserving transformations

Digging deeper than language-independent refactoring, it seems that refactoring's main value is in its ability to improve code by reversing bad decisions that were somehow expedient earlier. But underneath that, there are two cases. Firstly, there are cases where you refactor because you got the abstract design wrong earlier (e.g. you assumed there was only one Frob per Widget, and in fact there might be many). Secondly, there are the cases where you got the abstract design right, but the code-level design wrong, i.e. you didn't map the abstract design optimally onto language features (with respect to maintainability, efficiency, ...). To me, it feels like there is too much emphasis on the second case, while the first one is harder and more interesting.

I think this is because automated refactorings aim to be behaviour-preserving. But since the two problems are very close---they both arise from forced premature commitment and the programmer's failure to anticipate the future---we should perhaps use the same tools to combat both of them. In other words, the requirement that refactorings should be behaviour-preserving actively limits what we can do. So how about some bolder approaches that might sidestep the problem? While these approaches might violate the letter of the definition of refactoring, for me, they retain the most useful characteristic of refactoring: by a single localised change, we can effect global changes on our codebase.

The only work I know that does automated non-local code edits that can change program behaviour is Coccinelle, based on the “semantic patch” idea. Aspect-oriented programming is a similar technique, but works by effectively (and controversially) delocalising run-time semantics rather than performing non-local code edits. I'd like to know if there are others more like Coccinelle already in existence.

So, suppose we discard the restriction of limiting ourselves to behaviour-preserving edits. One direction under this auspice is to creep closer towards model-driven development. I want the ability to change my “model” (either in my head, or in some modelling notation) and see changes reflected in source code. And also, vice-versa: if we do have a modelling notation, code changes should update the model. This is a hard but interesting problem in bidirectional transformation, which has something of a currency at the moment (witness the BX workshop).

Logic metaprogramming

A final thought is about logic metaprogramming. This is a very cool idea that I have not yet got up to speed on. In fact, almost all I know about it is from the abstract of a paper I saw at SPLASH last year, which said: “In logic metaprogramming, programs are... derived from a deductive database.” But this one sentence is so intriguing that I want to run for a while with what I think it might entail, before I find out what is actually done in existing systems (of which there are few!).

I've often wanted to program not by writing code directly---since I'm often aware that the code I'm writing will probably turn out “wrong” or “bad” once I've done a bunch more coding---but by making a sequence of simpler statements that I have more confidence in. Each statement should be small, freestanding and less of a commitment than writing a line of code would be. These statements might be such that none of them, by itself, confers enough information to write a "known good" piece of source code. E.g. I might write that each instance of class A “has a[n]” associated instance of class B, but I don't yet know whether this association should be expressed as pointers, or by some associative data structure, say. This decision could be determined later, by solving constraints originated by other small statements. Ties could be broken (i.e. multiple candidate solutions selected among) by extrafunctional requirements such as performance (which might favour pointers over associative structures).

This is related to program synthesis and refinement methodologies, I guess. But I am particularly interested in making it exploratory. By having a tool explore the implications of the programmer's statements, we can potentially refine our understanding of the problem (a.k.a. “debug the design”) without going through the circuitous path of first writing some “bad” code and then either finding it's not the right design (and cleaning it up by heavyweight code changes) or finding it's incidentally messy (and cleaning it up, just by automatic refactoring if we're lucky). We can also have a tool tell us what the “right way” to code something is, but early. If the only solution to a particular set of requirements is to use a particular language feature, then the tool can tell us this, rather than letting us find it out by making the wrong choice and then backtracking. Of course, we need to get the requirements right up front, so this technique will only ever be a complement to more backtracking-oriented techniques.

Multi-dimensional representations of software

It is a very classical notion that programs have one true form, being their source code in the form of a string of symbols. Refactoring sticks with this idea but tries to make it easier to alter that form, by abstracting and automating certain common complex non-local edit patterns. But we can go further by rejecting the notion of “one true form” altogether, at least in the sense that that form is manipulated by programmers.

Of course, this is the MDSoC holy grail. I think the idea is just slightly too big for its own good, at present. Ironically, or fittingly, it has not been decomposed properly: aspects, refactoring and typeclasses are the main programming weapons that share its spirit, but none has its power or elegance. It's a shame that work on the idea seems to have fizzled out. (It's also a shame that the paper isn't on more reading lists!)

Somewhat relatedly, there's been some interesting work on subjective/transient/dual encodings of language features, as with the registration-based stuff at last year's Onward!, or Rob Ennals' Jekyll. But I'm still not aware of any mature tools that can really rip apart the modular structure of code and transform it on demand. Perhaps one problem is that we need to be able to define what primitive entities these queries “select”, and how they reformulate them into the appropriate bigger chunks---ideally in a language-agnostic way. So it's back to the shared metamodel. Again, better understanding of “cognate” language features, and indeed of less intuitive correspondences between language features (like the nontrivial correspondences between algebraic data types and class hierarchies), will help here.

[/research] permanent link

Guided by folklore

In a recent chat with my internal examiner, Andy Rice, I had a few thoughts which I decided to write down. It turns out he reads my blog---along with (his words) “everyone in the department”---so, hi Andy and everyone. One day I might stop writing as if my audience consists only of myself, but not right now.

In summary, I want to rant about two weird things that go on in the research world. One is that there are some memes that seem to have a real influence on how PhDs are examined, but seem to have no origin other than folklore, and are different from the standards used to judge other research. The second rant, and perhaps the more oft-repeated, is that we actively encourage boring research.

(I should add that although this post is rather ranty, the chat was not an argumentative one. So, this is mostly post-hoc ranting about related topics, and not a direct reflection of our conversation.)

A thesis is judged on criteria from folklore, beyond what applies to “normal” research. At various points in my PhD, I heard it said that “a thesis should... [do X]”. Usually, X was something to do with telling a complete story, strongly substantiating a succinct hypothesis, and so on. And now I have heard the same from my examiners. Unfortunately, these statements continue to be just that---hearsay. They're different from the ways in which other research is judged. There are no regulations or official guidance to support them. There are no clear scientific or moral justifications for them either. The research community happily publishes many papers that do not tick these boxes, and at good venues. My own OOPSLA '10 paper is one example, but there are lots of others. But despite this, PhD examination seems to give a lot of currency to these criteria, for apparently no reason other than their having been handed down through the generations.

During my PhD I didn't worry myself much about this, since, like most researchers, I don't put much weight on unsourced claims. Besides, there seemed to be enough data downplaying their significance anyhow---several other theses seemed to break the rules, and plenty of published, respected research papers did too. Surely if a PhD is training for research, the qualifying criterion should be focused on doing good research? From my very limited experience, and from what I gather from listening to others, this is not how things currently are. Fortunately, I am of the bloody-minded type. I was aware that I might be “creating trouble” for myself, but I personally preferred to risk creating that trouble, thereby at least gathering some evidence about it, rather than swerving to avoid an obstacle that was at best nonexistent (I didn't know it would cause trouble) and at worst, worth challenging. So, consider it challenged! If you think a thesis needs to be anything more or different than good research, I challenge you to justify that position.

Now, on to my second rant. The evaluability problem has an irrational hold on many practical computer scientists, to the extent that research into many important problems is deliberately avoided. I spoke to many experienced researchers about my PhD work as it went along. Several of them suggested that I might have some trouble at examination. This seemed odd to me, for the reasons I just ranted about. Nevertheless, I didn't disbelieve them. But I had no intention of applying the fix they suggested. Rather than suggestions to develop an alternative evaluation strategy or (the best advice in hindsight) to maximise the persuasiveness of the presentation of whatever evaluation data I did have, the only “advice” I ever received on this point was a not-so-veiled encouragement to abandon my current problem and work on something else. “Up and to the right” was what one researcher told me---about the kind of graph that should be in my evaluation chapter. (My evaluation chapter has no graphs, and is staying that way.)

This attitude is the tail wagging the dog. If a problem is important, and we do some research that is not conclusive, we should damn well work harder at it, not give up. The problems and curiosities of humankind are not regulated by how easy it is to collect data and draw graphs about them. If we avoid working on important but difficult-to-evaluate problems, or discourage such work, it shows the worst kind of ivory tower mentality. It is far from a pragmatic position, despite how (I'm sure) many of its adopters would try to spin it. What is pragmatic about ignoring the real problems?

I'm not downplaying the importance of evaluation. It goes without saying that measuring the value of innovations is important. Moreover, our ability to measure is something we need to work on actively. After all, many of those physicists and other “hard” scientists seem to spend nearly all their time working out ways of measuring stuff. So I'm completely in favour of rigorous evaluation. On the other hand, I'm not sure that a lot of evaluation that currently passes muster is really rigorous anyway. We need to recognise evaluation as a problem in its own right, whose hardness varies with the problem---and make allowances for that. For many hard problems, evaluation of a solution is comparably hard. That shouldn't mean that we give up any attempt to tackle those problems. The preference for conclusive results in published research has a deceptive influence, being essentially the same phenomenon as the “decline effect”, described in this very interesting article from the New Yorker.

There are some other problems with evaluation in particular kinds of CS research. One is what I call “evaluation by irrelevant measurement”: if you develop something that is supposed to help programmers, but you can't measure that, how about measuring its performance or proving its type-soundness? It says nothing about whether you've achieved your goals, but it still ticks those evaluation boxes. And of course we have a big problem with reproducibility of experimental results---at the VMIL workshop at SPLASH, Yossi Gil gave a great talk about the non-reproducibility of VM-based microbenchmarks, and Jeremy Singer's Literate experimentation manifesto was a nice counterblast to the wider problem.

I have found programming language researchers to be more sympathetic than “systems” researchers to work “towards” a goal, as distinct from work telling a complete story about some problem. This is partly because the nature of programming language research makes reliable evaluation a very high-latency endeavour. In other words, until real programmers have used your idea in a large number of projects, there will be no clear experience about how well it works. So, being computer scientists, we mitigate that latency, using pipelining. Rather than a slow stop-and-forward algorithm which waits 20 years between research projects, we have to be more amenable to two approaches: argument, in the sense of paying attention to the reasoning that justifies the approach of a particular piece of work, and speculation, meaning allowing the research discourse to explore many alternative approaches concurrently, and letting time tell which ones will “stick” out of the many that have been given a chance. The job of the researcher is less to conclusively show a problem as solved, but to show that a technique is feasible and has some potential for wide and successful application.

Going back to the first point, perhaps I should add that I'm not saying that my thesis would have stood up any more strongly by “good research” criteria. But having said that, a very large chunk of it appeared at a top-tier venue, so it can't be all that bad. Both of my examiners seemed to miss this fact, so the lesson is: always put a prominent summary of your publications in your thesis! Personally I can be very critical of my thesis work. But it seems bizarre to me that folklore should have so much sway in the way that theses are examined.

[/research] permanent link

Thu, 01 Dec 2011

Weak dynamic symbols

Although I know more than the average bear about linkers, there's always things I don't know. Until now I never had cause to understand the following: how does the linker know which symbols to link dynamically, and which to link statically?

A bit of head-scratching reveals the only possible answer. Given a command-line, it does the usual thing: look at the command-line options, perform the standard library look-up procedure, gathering a list of object files---some static, some dynamic. If a symbol is defined by a dynamic library, make it dynamic. Otherwise, it stays static.

That sounds fairly sensible. But it can mean surprises. Suppose you have a C program that wants to support an optional feature to be linked in at load time. You might write something like the following.
int optional_function(int arg1, void *arg2) __attribute__((weak));

/* ... */

void do_something(void)
{
    if (optional_function) optional_function(42, &some_obj);
    /* else skip the optional part... */
}

If you pull this sort of trick within a shared library, it works fine. But inside an executable: no dice! If you compile this into an executable and look for optional_function in your dynamic symbol table, you'll be disappointed.

$ objdump -T my-program | grep optional_function
$

What is going on? Well, it's in the static symbol table, silly.

$ objdump -t my-program | grep optional_function
0000000000000000  w      *UND*  0000000000000000          optional_function
$

What does it mean to have an undefined symbol in your executable's static symbol table? It means it will silently take the value zero! In fact, the relocation records referencing your symbol have already been discarded.

$ objdump -rRd my-program | grep -A1 -B1 callq
(snip)
--
  400549:      bf 2a 00 00 00        mov    $0x2a,%edi
  40054e:      e8 ad fa bf ff        callq  0 <__init_array_end>
  400553:      b8 00 00 00 00        mov    $0x0,%eax
--
(snip)

Cheerily, the linker has inserted a direct-bound call to address zero in your code. That's not what we want! So, how can we fix it?

The trick is in the linker's (or at least the GNU linker's) --dynamic-list option. First, create a file called whatever you like (mine's called dynamic-list), containing the following.

{ optional_function; };

Now link your program passing --dynamic-list <your-dynamic-list> to the linker.

gcc -Wl,--dynamic-list -Wl,<your-dynamic-list> -o my-program my-program.c

Hey presto! You should now have your weak symbol in the dynamic symbol table.

$ objdump -T my-program | grep optional_function
0000000000000000  w   D  *UND*  0000000000000000          optional_function
$

That's a bit ugly. Recalling the linker behaviour I described at the beginning, the simpler way to do it is just to link your executable against a shared library defining optional_function.
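
For example, here is a minimal sketch of that provider side (the file and library names are made up). Build it as a shared object and name it on the executable's link line; since a dynamic library now supplies the definition, the linker will treat optional_function as a dynamic symbol.

/* optional.c: build with something like
 *   gcc -shared -fPIC -o liboptional.so optional.c
 * and add it (or -L. -loptional) to the executable's link line. */
#include <stdio.h>

int optional_function(int arg1, void *arg2)
{
    printf("optional feature invoked: arg1=%d, arg2=%p\n", arg1, arg2);
    return 0;
}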

You might wonder (as I do): what is the point of putting undefined symbols in an executable's static symbol table? Once the executable is output, it's too late to link anything with them. Surely they should all be “promoted” to dynamic symbols? [Update, 2012-5-19: there is of course a linker option for doing this, which in the GNU case is --export-dynamic. Still, I'm not sure why it isn't the default.]

It would also be nice to have an objcopy option for adding dynamic symbols in this way, so we can do it after the fact, rather than changing the linker command like we did above. However, this is nontrivial for the reason I mentioned---the relocation records that you would want have already been eliminated. So, we would need to re-create them. This is similar to something I began work on before. At some point I might resurrect my objcopy patches and try to repurpose them to this problem. For now, I will just hack in the extra linker options.

[/devel] permanent link

Sat, 12 Nov 2011

Static versus dynamic analysis---an illusory distinction?

When writing a recent talk, I found myself arguing that static and dynamic analysis are not really that different. At least, people don't really agree on the distinction. Model checking people frequently argue that what they're doing is dynamic analysis, because it directly explores paths through a system's state space. Meanwhile, abstract interpretation people would argue the opposite, since clearly model checking is an instance of abstract interpretation, and so is all other static analysis.

I'd much rather avoid the debate entirely. Since model checking is a far cry from run-time checking or testing, my sympathies initially lay with the abstract interpretation camp on this particular issue. But the distinction becomes even more ill-defined in other cases. In particular, I've been thinking a lot about symbolic execution, of the kind done by KLEE and other tools. Is it doing a static or a dynamic analysis? I'd challenge you to defend either position.

(Meanwhile, execution environments which are somewhat speculative, like transactional memories, lazy evaluators, or even plain old branch prediction, can be considered as partial runs of a static analysis. But I should save that line of thinking for another post.)

So rather than talking about static or dynamic analyses, here are some dimensions of analyses that I think are more interesting.

That's all for now. Let me know if you can think of any more noteworthy dimensions!

[/research] permanent link

Thu, 06 Oct 2011

LLVM structural typing

I'm learning about the LLVM compiler infrastructure at the moment.

LLVM bitcode includes a notion of data types. These are used to control implicitly the size and encoding of values generated by various operations, to hint at mappings to underlying machine data types (e.g. on architectures that distinguish floating-point from integer registers), and to cause certain transformations, such as padding or sign extension, to be applied implicitly. (I'm not yet sure whether all such operations need to be rendered explicitly as an LLVM “bitcast” operation. At least, LLVM's notion of types can be used to define the validity of these operations, whether or not they happen implicitly.)

Moreover, addresses (pointers) are typed according to the type of the values they reference (point to). The data types are in this sense higher-order. (This is a weaker case of “higher-order” than types specifying the behaviour of functions. But it has some things in common. I will blog about this more in the future.) These data types control implicitly how much data is read or written by indirect loads and stores.

A typical C front-end will encode C data types directly into this type system. However, this is just a convenience. The encoding discards some of the semantics of the C type system, because in LLVM, composite types are treated purely structurally, whereas in C, they are always treated nominally. Consider this program.

#include <stdlib.h>

struct Foo {
  int a;
  int b;
};

struct Bar {
  int x;
  int y;
};

int main(void)
{
  struct Foo *f = (struct Foo *) malloc(sizeof (struct Foo));
  struct Bar *b = (struct Bar *) malloc(sizeof (struct Bar));

  free(f);
  free(b);

  return 0;
}

In LLVM bitcode, using llvm-gcc, we get the following.

; ModuleID = 'test.o'
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-f128:128:128-n8:16:32:64"
target triple = "x86_64-unknown-linux-gnu"

%struct.Bar = type { i32, i32 }
%struct.Foo = type { i32, i32 }

define i32 @main() nounwind {
entry:
  %retval = alloca i32
  %0 = alloca i32
  %f = alloca %struct.Bar*
  %b = alloca %struct.Bar*
  %"alloca point" = bitcast i32 0 to i32
  %1 = call noalias i8* @malloc(i64 8) nounwind
  %2 = bitcast i8* %1 to %struct.Bar*
  store %struct.Bar* %2, %struct.Bar** %f, align 8
  %3 = call noalias i8* @malloc(i64 8) nounwind
  %4 = bitcast i8* %3 to %struct.Bar*
  store %struct.Bar* %4, %struct.Bar** %b, align 8
  %5 = load %struct.Bar** %f, align 8
  %6 = bitcast %struct.Bar* %5 to i8*
  call void @free(i8* %6) nounwind
  %7 = load %struct.Bar** %b, align 8
  %8 = bitcast %struct.Bar* %7 to i8*
  call void @free(i8* %8) nounwind
  store i32 0, i32* %0, align 4
  %9 = load i32* %0, align 4
  store i32 %9, i32* %retval, align 4
  br label %return

return:                                           ; preds = %entry
  %retval1 = load i32* %retval
  ret i32 %retval1
}

declare noalias i8* @malloc(i64) nounwind

declare void @free(i8*) nounwind

Notice that although the compiler has emitted two LLVM type definitions, one for each of our struct types, it then proceeds to use only the first one of them. The second is redundant, because the two are structurally equivalent. This starts to look even more peculiar when we make our data types recursive.

#include <stdlib.h>

struct Foo {
  int a;
  int b;
};

struct Bar {
  int x;
  int y;
};

struct FooRecursive {
  int a;
  struct FooRecursive *next;
};

struct BarRecursive {
  int a;
  struct BarRecursive *next;
};

int main(void)
{
  struct Foo *f = (struct Foo *) malloc(sizeof (struct Foo));
  struct Bar *b = (struct Bar *) malloc(sizeof (struct Bar));

  struct FooRecursive *fr = (struct FooRecursive *) malloc(sizeof (struct FooRecursive));
  struct BarRecursive *br = (struct BarRecursive *) malloc(sizeof (struct BarRecursive));
  
  free(f);
  free(b);
  free(fr);
  free(br);
  
  return 0;
}

This gives us the following.

; ModuleID = 'test.o'
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-f128:128:128-n8:16:32:64"
target triple = "x86_64-unknown-linux-gnu"

%struct.Bar = type { i32, i32 }
%struct.BarRecursive = type { i32, %struct.BarRecursive* }
%struct.Foo = type { i32, i32 }
%struct.FooRecursive = type { i32, %struct.BarRecursive* }

define i32 @main() nounwind {
entry:
  %retval = alloca i32
  %0 = alloca i32
  %f = alloca %struct.Bar*
  %b = alloca %struct.Bar*
  %fr = alloca %struct.BarRecursive*
  %br = alloca %struct.BarRecursive*
  %"alloca point" = bitcast i32 0 to i32
  %1 = call noalias i8* @malloc(i64 8) nounwind
  %2 = bitcast i8* %1 to %struct.Bar*
  store %struct.Bar* %2, %struct.Bar** %f, align 8
  %3 = call noalias i8* @malloc(i64 8) nounwind
  %4 = bitcast i8* %3 to %struct.Bar*
  store %struct.Bar* %4, %struct.Bar** %b, align 8
  %5 = call noalias i8* @malloc(i64 16) nounwind
  %6 = bitcast i8* %5 to %struct.BarRecursive*
  store %struct.BarRecursive* %6, %struct.BarRecursive** %fr, align 8
  %7 = call noalias i8* @malloc(i64 16) nounwind
  %8 = bitcast i8* %7 to %struct.BarRecursive*
  store %struct.BarRecursive* %8, %struct.BarRecursive** %br, align 8
  %9 = load %struct.Bar** %f, align 8
  %10 = bitcast %struct.Bar* %9 to i8*
  call void @free(i8* %10) nounwind
  %11 = load %struct.Bar** %b, align 8
  %12 = bitcast %struct.Bar* %11 to i8*
  call void @free(i8* %12) nounwind
  %13 = load %struct.BarRecursive** %fr, align 8
  %14 = bitcast %struct.BarRecursive* %13 to i8*
  call void @free(i8* %14) nounwind
  %15 = load %struct.BarRecursive** %br, align 8
  %16 = bitcast %struct.BarRecursive* %15 to i8*
  call void @free(i8* %16) nounwind
  store i32 0, i32* %0, align 4
  %17 = load i32* %0, align 4
  store i32 %17, i32* %retval, align 4
  br label %return

return:                                           ; preds = %entry
  %retval1 = load i32* %retval
  ret i32 %retval1
}

declare noalias i8* @malloc(i64) nounwind

declare void @free(i8*) nounwind

Notice that the self-referencing structure of FooRecursive has been lost, again because a different type is structurally equivalent.

Now for a final experiment: what about singleton structs? Are they structurally equivalent to a single element? I'll throw in a typedef too, to see whether that appears.

#include <stdlib.h>

struct Foo {
  int a;
};
typedef int Baz;

int main(void)
{
  struct Foo *f = (struct Foo *) malloc(sizeof (struct Foo));
         Baz *b =        (Baz *) malloc(sizeof        (Baz));
  
  free(f);
  free(b);
  
  return 0;
}

Here's the code it generates.

; ModuleID = 'test.o'
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-f128:128:128-n8:16:32:64"
target triple = "x86_64-unknown-linux-gnu"

%struct.Foo = type { i32 }

define i32 @main() nounwind {
entry:
  %retval = alloca i32
  %0 = alloca i32
  %f = alloca %struct.Foo*
  %b = alloca i32*
  %"alloca point" = bitcast i32 0 to i32
  %1 = call noalias i8* @malloc(i64 4) nounwind
  %2 = bitcast i8* %1 to %struct.Foo*
  store %struct.Foo* %2, %struct.Foo** %f, align 8
  %3 = call noalias i8* @malloc(i64 4) nounwind
  %4 = bitcast i8* %3 to i32*
  store i32* %4, i32** %b, align 8
  %5 = load %struct.Foo** %f, align 8
  %6 = bitcast %struct.Foo* %5 to i8*
  call void @free(i8* %6) nounwind
  %7 = load i32** %b, align 8
  %8 = bitcast i32* %7 to i8*
  call void @free(i8* %8) nounwind
  store i32 0, i32* %0, align 4
  %9 = load i32* %0, align 4
  store i32 %9, i32* %retval, align 4
  br label %return

return:                                           ; preds = %entry
  %retval1 = load i32* %retval
  ret i32 %retval1
}

declare noalias i8* @malloc(i64) nounwind

declare void @free(i8*) nounwind

Predictably, the typedef has gone away entirely, because it introduces no new structure. However, our singleton struct has stayed around. This isn't surprising either, because LLVM has instructions for accessing field members, whose semantics are affected by these structural differences. Composite types are not just sugar for arrays of bytes or words.

This does mean that if we wanted to encode nominal types into our LLVM bitcode, we could do it by wrapping nominally distinct types in differing depths of layered singleton structs. This would affect the bitcode that came out, e.g. inserting extra GetElementPtr operations, but shouldn't affect the optimised compiler output.
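
As a rough sketch of what that wrapping might look like on the C side (illustrative only; I haven't pushed this particular snippet through llvm-gcc):

/* Foo, Bar and Baz all boil down to a pair of ints, but each extra layer of
 * singleton struct yields a structurally distinct LLVM type: roughly
 * { i32, i32 } versus { { i32, i32 } } versus { { { i32, i32 } } }.
 * Field accesses pay for the nominal distinction with extra GetElementPtrs. */
struct Foo { int a; int b; };

struct Bar { struct { int a; int b; } wrapped; };                      /* depth 1 */

struct Baz { struct { struct { int a; int b; } wrapped; } wrapped; };  /* depth 2 */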

Overall, we can say that LLVM's data types are an annotation useful primarily for propagating and/or inferring data size and encoding information contextually through load and store operations. They are also used for checking: the bitcode is checked for first-order type errors. Since these errors are raised at the point of pointer use (e.g. a pointer assignment without an appropriate bitcast is an error), the checks can catch likely first-order errors early, i.e. when a pointer is generated or stored, rather than later when it is dereferenced and its target data is misinterpreted. Here by “first-order type errors” I roughly mean those at the machine level, meaning they always concern the misinterpretation of bit-patterns encoding primitive values (integers, addresses, floating-point numbers). Since nominally distinct data types are conflated when they are structurally equivalent, the bitcode will not capture (without the encoding trick I mentioned) that, say, one variable encodes a point in Cartesian fashion by x and y, and another in polar fashion by r and theta. Detecting violation of these abstractions is beyond the reach of any analysis based (only) on LLVM data types using the standard encoding.

This includes dynamic analyses. Right now I'm writing an is_a function for KLEE. KLEE is (effectively) a fancy LLVM interpreter. Without recourse to the source code and/or hacking the C front-end, we can only do structural is_a, which is slightly disappointing. I should add that I'm not criticising LLVM here at all. Intermediate representations are not the place for fancy type systems, and the structural approach works nicely. It just means more work for me, when it looked for a moment as though I could abuse pre-existing functionality.

[/devel] permanent link

Mon, 22 Aug 2011

Pipelines (are lazy functional composition with recombination)

Unix pipelines are often held up as a paragon of compositional virtue. It's amazing how much they can do, how efficient they are, and how (relatively) simple they can be to use.

Programming languages people especially like this sort of thing, and functional programming clearly has some synergy with pipelines---which are, after all, a kind of functional composition. Microsoft's F# language even borrows the pipe symbol for its |> operator.

But pipes are not just compositions of functions mapped over lists. They have two other properties that are critical. The first is well-known, and the second less so.

First is laziness. Since each process in the pipeline explicitly forces its input, pipelines can be used on infinite streams (or portions thereof) just the same as with finite data.

The second is what I call recombination. Each stage in the pipeline can have radically different ideas about the structure of data. For example, pipelines typically combine characterwise (tr), linewise (sed) and whole-file (sort) operations in one go.

This is much harder to achieve in a functional setting because you don't have access to an underlying representation (cf. character streams on Unix): if your part of the pipeline understands data differently than the previous one, you have to map one abstraction to the other by writing a function.

Meanwhile, the shell programmer has a lower-level set of tools: familiar recipes for dealing with idioms such as line-breaking and field separation, which recur across many different kinds of abstract data but may be recombined using the same tricks in each.

The upside of the abstract, functional approach is that you preserve the meaning of your data, even in the face of some heavyweight transformations. Meanwhile, shell programmers are often driven to frustration by their separator-munging not working properly once things get complex.

The downside is that it's more work, and more repeated work, because you might (in the worst case) have the quadratic problem of having to map n abstractions to n-1 other abstractions. By contrast, simple pipelines can often be quickly hacked together using a few standard idioms. They can even be written to be robust to changes in the underlying abstraction (such as numbers of fields), although this takes some extra care.

A middle ground might be to characterise those idioms and formalise them as a functional abstraction, into which many higher-level abstractions could be serialized and deserialized, but without going all the way down to bytes. Perhaps such a system already exists... it sounds a bit like a reflective metamodel of functional records or somesuch. Hmm... perhaps Lispers have been doing this for decades?

[/research] permanent link

Fri, 08 Jul 2011

In praise of (good) workshops

Publishing has at least two roles in science. On the one hand, it is a means of disseminating results, concretising the “progress” made by completed work. On the other hand, it is a source of feedback: it's a pillar of the peer review system that rejected papers receive feedback. Meanwhile, the idea behind conferences and workshops is, at least in theory, that each presentation will stimulate discussion and further feedback.

During my PhD I learnt the value of a pattern which seemed to suit my style of work, as follows. When work is under way, write an in-progress style of paper. This gathers early feedback. If it gets accepted (which it usually will, if the idea is good), you also get the benefit of the presentation and the subsequent feedback and discussion. Later, once you have results, write a full research paper to present them. Inevitably, you will have a lot more to say this time round. Some things will have changed, too. You will be able to write a better paper than before, because the earlier paper gave you some idea how to present the work and how to address its perceived weaknesses. (I admit I've only completed this bit of the cycle once so far! There is more in the pipeline....)

When I first went to a conference, I was surprised at how little conferring was going on. Many talks received only a couple of questions. Workshops, on the other hand---at least the good ones---set aside more time for discussion. Smaller audiences at workshops make it more likely that people will initiate discussion as a talk goes along. The lunch and other break times tend to have a more discussion-heavy vibe than those between conference sessions. This is perhaps, again, because a smaller number of people encourages more discussion. Also, the workshop group tends to “stick together”, unlike at a conference, where people diffuse between sessions. I guess single-track conferences are better in this respect, but I've only been to one of those, and I don't recall a lot of high-quality discussion that time.

(Poster sessions are not bad either, for discussion, if your poster can grab people's attention. But they are painful to present at... never again, I have vowed.)

Recently I had it put to me by an experienced researcher that workshops are not worth bothering with: they're just for people who are starting out, or for less good work, and they stop you from publishing at a better venue. I sympathise because I've been to some bad workshops, and seen some decidedly poor “research” presented at them. But that's an argument for greater workshop participation, not less. Submitting interesting ideas to workshops, for discussion, is exactly what's supposed to happen. The reason that they degenerate into small conferences for mediocre-or-worse work is precisely because they don't get enough good submissions by good people. Some workshops are established, and get good participation, and work very well in that form.

Prior publication at workshops is a subtle thing, but in short, is not something I worry about. I have certainly seen workshops that have (online) digital proceedings but from which it's common to see follow-up papers appear later at conferences. I'm not sure whether this is because workshop papers, being more preliminary presentations of work, simply “don't count” (an opinion I've heard voiced) or because those follow-up papers present quite a large delta. For the kind of work I do, a big delta is not hard to achieve anyhow---the contributions of the workshop paper would mostly be in argument, “position” or “idea”, together with perhaps some motivating experiments and preliminary results. Implementation and ensuing experimental work are saved for a full paper. Archival is cheap nowadays, so the convenience of having a printed proceedings accessible from the same place where we can find all the other papers shouldn't be seen as giving equal contribution-weight to these papers. (Suggesting otherwise seems to me to be endorsing a “numbers game” approach to the evaluation of research. Heaven forbid that we actually decide for ourselves what the contribution of some paper is, by reading it.)

I can see this split being less applicable for more theoretical work. The more abstract the formulation of the problem, the less there is to argue about. For practical work, discussing the problem set-up and high-level approach is very valuable. Even when work seeks to build big systems, the idea behind some work is often much bigger than the part that you are actually able to implement in practice. It's nice to have an opportunity for the bigger ideas to be discussed, reviewed and recognised.

A final reason for me to enthuse about workshops is that I'm one of the “little guys”. So far I've worked only on my own. I don't have big collaborative projects whose conference papers I can parachute onto. And I don't have very many coworkers who I can discuss my ideas in detail with. Workshops are a support infrastructure that I particularly need---for feedback, and also, perhaps slightly cynically, to maximise the exposure my work gets. Ultimately I want to convince people that my research vision is worth investing in. It's important that I take up opportunities for conveying my potential---which I believe to be great!---as well as what I've achieved, which will never match up to those who are habitual collaborators. Of course I'm not opposed to collaborating---far from it, but I just can't seem to find the right person....

[/research] permanent link

Wed, 06 Jul 2011

Functionality, not (just) reliability

I'm still a newcomer to verification. The more reading on verification I do, the more evident it is that most work in the area is interested in checking fairly boring properties of programs: usually that they don't crash, don't corrupt memory, or that some simple assertions hold. In this post I'll argue that thinking of a proof of these properties as “job done”, or even as a primary goal, is counterproductive: it overlooks the hidden costs of verification, and overstates the value of such proof. There are far more important top-level goals than verifying these properties, yet there seems to be a real danger that research is overlooking them almost entirely.

To explain myself, I'm going to highlight three distinctions I've come up with about different kinds of work that can loosely be called “verification” or “formal reasoning about programs”. None of them are commonly mentioned, and certainly not by the names I've given them. But they seem fairly important to me, and in each case, one side of the divide is sorely neglected.

Positive specification versus negative specification

I call these kinds of “doesn't fail” properties “negative specifications”---they say what the software shouldn't do. I'm not trying to denigrate any work that verifies against these specs. It's often more than difficult enough to check these “boring” properties statically, without making things any more “interesting”. Nevertheless, focusing only on negative properties seems to neglect the classical goal of verification, which is to check that an implementation satisfies a specification which captures its intended functionality.

Tackling this means addressing “positive specifications”: what the software should do. This is a similar distinction to that between liveness properties and safety properties. But it is not the same: specifying liveness in the sense of “not deadlocking” or “not hanging” is still a negative specification. Really, what defines a positive property is that it has something to do with functionality, and not whether it's stated in a way that uses logical negation. We should aim to specify positive properties that capture an isolated facet of our system's intended functionality, expressed in application-level terms: perhaps small behaviours that a program simulates (but doesn't bisimulate), or small functional dependencies that a program reproduces.

Conservative versus “liberal” software construction

I had a hard time thinking of the right antonym to “conservative”, and “liberal” will have to do (as “reckless” was my other candidate).

Some classical approaches to programming have a highly conservative property that I call compulsory proof: the programmer cannot make progress without ensuring that some specification remains satisfied. Static typing is the most familiar example: type-checking is, from Curry-Howard, proving a property about your program. You can't do much with a program that doesn't type-check. Refinement-based synthesis methods are another example: a refinement step is only valid if it preserves the specification, and provably so.

As we scale up to larger and more complex programs, and larger and more complex specifications, these approaches start to inhibit progress. It's interesting how dependent types are still a “nearly there” feature---nobody has made them usable yet. I'm told that making programs type-check becomes harder and harder under increasingly dependent types. This is unsurprising: we're making the specifications (a.k.a. types) more complex, and so harder to prove satisfaction of. It seems that the most pragmatic solution so far is to relinquish the insistence on proof at all times, a compromise adopted by Edwin Brady's Idris.

The idea of compulsory proof is flawed because programming is usually exploratory. As a programmer, I often start coding before I fully understand the problem. The process of programming provides necessary illumination. So, presupposing any specification that I must doggedly adhere to at each stage, whether from the tyranny of a type-checker or the tyranny of a refinement procedure, is anathema---it actively stops me from acquiring the understanding I need to complete the task. In the worst case, compulsory proof is a net loss: it slows us down far more than it helps us. What's unfortunate is that our research evaluation methods don't account for this. No programming research considers the net worth to a human programmer. Instead it prefers the mathematical orthodoxy of derivable properties such as preservation of type safety. I suspect these are poorly correlated.

The polar opposite to these conservative approaches, which I call “liberal”, is exemplified by dynamic languages. Here no proof of adherence to any specification (type-based or otherwise) is statically enforced. The programmer is free to break his abstractions at any time---even at run time. The only specifications that are enforced are those of pre-existing machine-level abstractions---integers, pointers and floating-point numbers---whose integrity is enforced by dynamic checks only.

As with most polar extremes, neither fully-conservative nor fully-liberal approaches tend to be optimal in practice. I think the consensus from mainstream languages (C, Java, C++) is that static checking using a simple type system is a good idea (and I'm sure even most die-hard Python programmers often yearn for some basic static checking). The jury is still out on the cost/benefit ratio of more complex type systems, even ones such as Java's generics. Ken Arnold vociferously argues against Java generics, and when I first read his opinion, I was sceptical of his view---surely more static checking can't hurt? Nowadays I can see his points about the cost in complexity and, particularly, in the proof burden on programmers.

Meanwhile, from the liberal end of the spectrum, bug-finding tools like KLEE are interesting: we liberally (or recklessly) allow bugs into our program, then try to whittle them down. It's bug-finding and not verification because we can happily specify properties that the tool can't conclusively show are not violated. A consequence is that in practice KLEE doesn't terminate for most nontrivial programs. On the other hand, the space of assertions that we can use KLEE to check is large: at least in theory it subsumes type-safety (noting that a specification of type-safety can be encoded by instrumenting an untyped program with assertions using a suitably-defined instanceof-like operator) and arbitrary application-specific assertions. There are a few problems with KLEE: it's limited by what we can express as assertions (or built-in properties); it doesn't terminate in enough cases; and it tends to find uninteresting bugs which, while not technically false positives, might well be if we could write a more refined input specification. Nevertheless, I like it because it treats perfection (i.e. proof) as the limit case, not the immediate goal. (It's no coincidence that I'm basically working on addressing these weaknesses right now, including making it terminate for more programs---or at least, I would be if I wasn't wrestling with LLVM bugs the whole time.)
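
To make that parenthetical concrete, here is a toy sketch of encoding a type check as an assertion. The is_a() predicate here is hypothetical and works off an explicit tag field; a real tool would recover the same information from allocation sites or debugging information rather than asking the program to carry tags around.

#include <assert.h>
#include <string.h>

/* purely illustrative run-time type tag */
struct header { const char *type_name; };
struct point  { struct header hdr; int x; int y; };

static int is_a(const void *obj, const char *type_name)
{
    return strcmp(((const struct header *) obj)->type_name, type_name) == 0;
}

static void move_right(void *p)
{
    assert(is_a(p, "point"));      /* the "type-safety" part of the specification */
    ((struct point *) p)->x += 1;  /* safe only if the assertion held */
}

int main(void)
{
    struct point pt = { { "point" }, 0, 0 };
    move_right(&pt);               /* a symbolic executor could also explore callers
                                      that would violate the assertion */
    return 0;
}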

Constructed reliability versus emergent reliability

Using any of these techniques takes time. Writing specifications takes time. Even if you say “my verifier requires no annotations!” (like a lot of Dawson Engler's work, including KLEE), you probably rely on assertions. Even if you restrict yourself to “belief”-style latent specifications (like Engler's 2001 SOSP paper), they got there by somebody writing code. If you rely on “mined” specifications, as recovered by a bunch of work (like this and this and this) you have to gather a corpus and run a mining algorithm and check for false positives and then use some other technique to hunt the false negatives.

In other words, despite half-claims to the contrary, we make our software reliable by pouring some amount of labour into it. New, exciting techniques are exciting because they deliver more with less labour.

But there is an appalling lack of holism here. (Aside: Thomas Ball's call for holism is an interesting read and I applaud it; here I like to think I'm going much further!) What other approaches do we have for making software reliable? How about making it easier to write the code, so that we have more labour to pour into the reliability side?

In other words, nonfunctional concerns trade off against each other, and also against functional concerns! Time spent coding new features is time taken away from static analysis or testing or any other reliability-focused work, and vice-versa. It's also time taken away from profiling and optimisation, from refactoring and documentation, and so on. So in other words, it all helps. Software's value isn't a sum of independent parts. The parts are interdependent; what benefits one benefits all. Reliability can be deliberately constructed by applying specific reliability-focused tools, and hacking at code until it passes muster by these tools. But also, it can emerge from superior development processes that made it easier for programmers to build in reliability in the first place.

Now let me lament my own position. In the research world, only a relatively narrow selection of approaches get any funding. Reliability is a perennially hot topic. It's unquestioningly considered sound research motivation to trot out lines about the importance of reliable cars or reliable trains or reliable nuclear power plants. Similarly, it's routine to trot out analogies with civil engineering, bridge-building and the like. Reliability is important, for sure. But thinking holistically, that doesn't mean we have to attack these problems by building tools with the narrow remit of making sure nothing bad happens in whatever code they're given. Improving all parts of the development process can contribute to these goals, and have untold other economic benefits in the process. Too many researchers' mission statements list reliable software as top priority. But reliability is just a means to economic (or “value”) gain. Hardly any researchers will say their goals are “quality software”, or “functioning software”, or “economical software development”. Why not?

A senior researcher here in Oxford recently pitched his work (to industrial visitors) by saying that verification is ensuring that “nothing happens”. I hope that was a gross simplification for pitching purposes, because we can do a lot better than that.

To finish, let me shamelessly bring on some of my current “background” research interests and activities. I'm interested in good ways of re-using existing code; good ways of adopting programming language innovations without rewriting the world; good editing tools (including refactoring and a whole lot more---I'll blog about these shortly); good dynamic analysis tools, including a good debugger (again, I'll blog more shortly). Of course, I didn't manage to find a job on any of these ideas. Since my PhD days, I've felt as though I was perhaps the only programming researcher in the world whose top priority is not specifically code that is reliable, per se, but the bigger yet seemingly more obvious goal of code that does what you want it to.

So, am I crazy? I took a job in verification because I thought I'd learn some program analysis skills that would be generally useful, including (later) application to the problems nearest my heart, not just to approaches sanctioned by the reliability orthodoxy. But it's a draining experience to be railing against orthodoxy all the time, especially when you feel like the only lunatic in the asylum. I'm not sure how much longer I can take it.

[/research] permanent link

Tue, 14 Jun 2011

Post post viva

I blogged previously about my PhD viva. I've finally got the examiners' reports, and was quite surprised by the difference between the two. Suffice it to say that one was much more positive than the other, and reassuringly for me, the more positive one was also from the examiner who is both more experienced in the role, and more familiar with my research area. (It's probably not hard for most readers to work out which one is which.)

I'm only kicking myself since I could, given this information, perhaps have steered quite a different path through the viva that would have resulted in far less extra work being demanded of me. Nevertheless, the same points in favour of doing the “corrections” that I am doing (a.k.a. self-financed development work) still stand from my last post, so I shouldn't kick myself too hard.

[/research] permanent link

Wed, 01 Jun 2011

Memtable again

I've finally got a nicely packaged implementation of memtables, the data structure I introduced in a previous blog post. It's in a single header---memtable.h. I've also fixed a couple of stupid bugs that crept into malloc_hooks.c just before I released the initial version. You can see an example of combining these two in heap_index_hooks.c---which you can compile into a simple LD_PRELOADable shared library that will instrument the glibc malloc to keep an indexed table of allocated chunks, keyed on address. It's pretty easy to search the memtable to find the heap chunk for a given pointer anywhere into an object. I'll integrate this into my Cake runtime implementation soon, and the whole lot will appear here in due course (i.e. eventually).

If you use any of these files, please drop me a line to say how you got on---I'd really appreciate it.

[/devel] permanent link

Thu, 19 May 2011

Namespace problems

It's always been a theoretical problem with C that there is no namespacing. I'd often wondered how much of a practical problem this really was, with “nothing major” my tentative answer. I've finally run into my first bona-fide gotcha arising out of this problem. In short: wxWidgets and GLib both define a GSocket.

Annoying as this is, it wouldn't be in my top ten grumbles about the experience of programming in C. A far bigger related problem is versioning. This doesn't get cited as a weakness of C because it's also a weakness of most other programming languages. The very reason I ran into the namespace problem was that I had to compile wxWidgets 2.6, rather than using the 2.8 revision that's packaged for my distribution. Version mismatches can be seen as namespace collisions too. Instead of getting the version you want, the namespace has been populated with slightly different stuff that is, despite its close relationship to what you actually require, still incompatible, much the same as if the namespace were polluted with random third-party stuff.

Versioning issues could perhaps be brought more under the programmer's control. Most programming languages don't have an explicit notion of “version” when importing stuff. But when explicitly consuming some target API, you are always assuming at least something about its version. Having the programmer declare which version of a set of declarations they want to import would be straightforward. In C, it could even be done quite neatly with just the preprocessor---say, a #define __LIBFOO_REQUESTED_VERSION 4.2 before the relevant #include.

Of course, pessimistically refusing to link across nominal mismatches of version would be a bad solution. We want a more structural and, indeed, behavioural or “semantic” approach. With the C preprocessor approach I outlined, it becomes the header file author's responsibility to embed a test about which prior API version the associated implementation is compatible with, most likely using a simple #if test. This responsibility is not unreasonable I'd say---the developers are in the best place to say what has changed with a new revision. And since it's in a header file, if the maintainers are lazy, the client programmer can override it.
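
As a rough sketch of how the two halves might look (with made-up names for “libfoo”, and with versions encoded as integers, say 3.1 as 30100, since #if can only evaluate integer expressions):

/* In the client, before the include: declare the API version the code was
 * written against. */
#define __LIBFOO_REQUESTED_VERSION 30100   /* "I was written against libfoo 3.1" */
#include <foo.h>

/* In libfoo's foo.h, maintained by the library's developers: */
#ifndef __LIBFOO_REQUESTED_VERSION
#define __LIBFOO_REQUESTED_VERSION 40200   /* default: assume the current API, 4.2 */
#endif

#if __LIBFOO_REQUESTED_VERSION < 30000
#error "this libfoo (4.2) is not compatible with clients written for versions before 3.0"
#endif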

One shortcoming of this approach is that the client programmer might be too lazy to work out which is the earliest library version their code will work with, and will instead select whatever version they are developing with. This is safe, but prevents some valid compositions. On a different system with a slightly older version of the library, the header might conservatively conclude that it's not compatible with the client, even though it could work. Anyway, I don't worry about this too much. Lots of researchers have thought about versioning before, so there are probably some good solutions knocking around.

Back to the sockets example, it's perhaps unsurprising that the name collision occurred when linking two chunks of infrastructure code. Name collisions are most likely when abstracting the same domain, having the same natural language vocabulary---namely sockets in this case. This is much more likely to happen in infrastructure software (i.e. modelling system resources) than application level software (modelling circles or ellipses or airline reservations or health records and so on), simply because you're less likely to link multiple instances of the latter together. Whereas application-level code is at or near the top of the software dependency graph, the infrastructure stuff is lower down so more likely to get sucked into a program through dependency.

I was interested to note Nick Nethercote's recent blog entry about a problem with duplication (generally) and bloat (specifically) associated with multiple wrapper layers for system calls and other nonportable interfaces. He was talking about mmap(), but the socket abstraction is another example. I have some research proto-ideas that might help with this problem. Essentially I'm interested in recovering a finer-grained style of interface description from code, based on the idea of “relational interfaces”. You could then use this description to infer that two sets of functions had very similar behaviour, and factor out the duplication (with appropriate refactoring or adaptation tools).

This whole problem is another consequence of our fragile direct-interfacing, in-order methods for constructing software. If we had a more flexible way of constructing software, the problem wouldn't arise. Rather than slavishly building on predefined interfaces that are specific to one underlying component---like one mmap() abstraction layer, or one socket abstraction---we need smarter tools for specifying our requirements abstractly and finding customised ways of satisfying them using a range of “found” code. This is what my Onward! '09 proto-paper was ranting about. I guess it's good that I'm still ranting. Interface hiding is as good an idea as ever, and more work on it will happen when I get time....

[/devel] permanent link

Memtables

At the MMNet workshop in Glasgow last week, I talked about memtables. These are an efficient associative data structure, built using virtual memory support on modern OSes (currently implemented for Linux only), that are useful whenever you want to key a table on addresses in memory. See my slides for more.

Since entries with numerically similar keys are stored close to each other, memtables are, like certain other associative data structures, amenable to searching within a key range as well as exact-match lookups. By contrast, hash tables can't do this. (That said, a hash table supporting duplicate keys can be used to store items grouped into small equivalence classes. This is sometimes good enough, and could be made to work in my case. Nonuniform key duplication will mess up the O(1) nature of hash tables though.)

Memtables seem like they could be useful in lots of places. I invented them for DwarfPython as a fast way of storing and retrieving metadata given a key that may be an interior pointer (hence the searching requirement). I'm also (soon) using them in Cake as a fast way of tracking what objects are associated with what other objects.

The key space doesn't have to be addresses. It's possible we could even use memtables for free chunk binning, since large sizes are sparsely used. I need to do some more experiments to establish this.

The implementation comes in two parts:

  • A generic set of malloc hooks for glibc: these hooks are “generic” in that they're designed to be easily specialised for various conceivable kinds of instrumentation. They're not generic with respect to the allocator---sadly they're specific to glibc, but most mature allocators should have some similar mechanism. The key usefulness of these hooks is in factoring the various cases of the malloc API---specifically the complex behaviour of realloc, but also other annoyances including memalign and null frees---into an easy-to-use set of higher-level hooks. These are likely (but not guaranteed) to be a better match for whatever your instrumentation is doing. For example, defining a post_successful_alloc() function will hook all events that allocate a new heap block, whether they originated in a malloc(), a realloc() or a memalign().
  • A generic memtable library: this will appear soon! It's a set of hook definitions that maintain a memtable, plus a lookup function.

Memtables are strictly faster than a hash table, at least for lookups, because they are basically a hash table without the hashing. At least for most applications of memtables, the table itself acts as an index for a bunch of linked lists---call them bins or buckets. Rather than mapping the key space onto a smaller hash space in order to keep the index small, we index directly by the key, and rely on the virtual memory trick to keep the index's physical footprint small (there's a minimal sketch of this below). Since we can only save page-sized chunks of space, the key space really needs to be used in a clustered and/or very sparse fashion. Just one used key per page of index is enough to allocate the whole table in physical memory, which we don't want. So if your table has four-byte entries, say, uniform key usage should be a lot less than one per thousand possible keys---but clusters of many are okay, so long as they're thousands apart.
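
Here is that sketch. It is not the real memtable.h interface (which is generic over entry types and key ranges); the names are made up for illustration. One list head is kept per 4KB region of the key (address) space, in a single MAP_NORESERVE mapping, so only the pages of the index that get touched ever consume physical memory, and freshly touched pages read as zero, i.e. as empty buckets.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define REGION_SHIFT 12   /* one bucket per 4KB of key (address) space */

struct entry { const void *key; struct entry *next; };

/* Allocate an index covering `coverage' bytes of key space. */
static struct entry **memtable_new(size_t coverage)
{
    void *t = mmap(NULL, (coverage >> REGION_SHIFT) * sizeof (struct entry *),
                   PROT_READ|PROT_WRITE,
                   MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0);
    return (t == MAP_FAILED) ? NULL : (struct entry **) t;
}

/* Find the bucket for a key: no hashing, just a shift. */
static struct entry **memtable_bucket(struct entry **table, const void *key)
{
    return &table[(uintptr_t) key >> REGION_SHIFT];
}

Range search, and hence interior-pointer lookup, then amounts to walking backwards or forwards from the bucket that a key falls in, since numerically nearby keys live in nearby buckets.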

    [/devel] permanent link

    Wed, 13 Apr 2011

    PhD examination

    So I passed my PhD viva a couple of weeks ago. I do, however, have a lot of corrections to do. In fact I have about the most corrections I could have, in hours of work terms, without having to resubmit my thesis. Thank God I don't have to do that. As it happens, the actual corrections to my thesis are not very many. I have to add the odd paragraph here and there, and collect a small amount of extra data. The killer is the non-thesis bit. I'll talk about that in a moment.

    There's a lot I could say to summarise my feelings about the viva. Here are the two words I've been using most when people have asked me how it went: “reasonable” and “annoying”.

    For the “reasonable” part, I have to thank my examiners, Andy Rice and Alex Wolf, who deserve credit for the depth at which they got to grips with my thesis. I was quite impressed with their attention to detail. Although I can (and will, shortly) disagree with their take on what is necessary or sufficient to substantiate my thesis, I also appreciate how my doing so is very much challenging a norm... and the examination process isn't the right place to do this. Examination is a pragmatic business, and when considered less on intellectual high ground and more in terms of personal risk and reputation, I could not reasonably have expected (at least not with high probability) their taking a different position.

    For the “annoying” part, in short, I was far too idealistic in my conception of the PhD examination process. Of course it has some room for intellectual rigour; but virtually no research in any practical field has such unthreatened validity that examination doesn't fall back on “due diligence” to some extent. Another word for “due diligence” is “hoop-jumping”, and that really sums up why I think my thesis attracted the complaints that it did: it didn't jump enough established hoops to make the examiners feel comfortable rubber-stamping it. I'm not saying my thesis is great; it's fairly weak really---but it's no weaker than a lot of other theses which seem to pass without problem. I suppose the examiners did rubber-stamp it in the end, given that I passed---but subject to corrections which, unsurprisingly, make it jump an additional hoop. I don't feel that jumping this hoop substantiates the thesis any more strongly, and this is the centre of my annoyance.

    A new rant about an old problem

    My problem is not a new one. Introducing a new language is a relatively common thing for a CS researcher to do. Assuming the claimed benefit of the language is a practical one, rather than a theoretical one, then evaluating the language is a huge problem. PhD students don't have the time or the budget to carry out large field studies. Anyway, instead of this, the usual approaches are to prove something about the language, to show that it has reasonable performance, and/or to apply it to case studies. I'm going to be bold and claim that the first two are hoop-jumping in most cases. It's a rare case indeed where a language's goal is actually to realise the theoretical property in question or to “do X really fast”. (Of course, other kinds of work, in theory and systems respectively, do have these as express goals, but I'm talking about languages here, where “language” is distinct from “calculus”.)

    It's reasonable to set performance or theoretical goals for your language in addition to your main goal, as this can be a source of interesting problems and brings the work closer to applicability in practice or interest in theory. However, it really annoys me when people confuse these goals. I hate seeing papers which introduce some new language feature that is claimed to help programmers---the usual end goal of any language---and then evaluate it either by an irrelevant proof or an irrelevant performance measurement. This has the effect of encouraging a confusion between the main goal of a language and these side-goals and, moreover, of encouraging a culture where evaluating the main goal is neglected in favour of the side-goals, or where the side-goals are seen to imply the main goals.

    Trouble with case studies

    Case study evaluation is unsurprisingly the approach I chose. This might have passed muster, except that the other hoop I didn't jump through was producing a complete working implementation. This doesn't mean I didn't implement anything: I did a lot of implementation work during my PhD. But for various reasons, my reach had exceeded my grasp. I had plenty of working examples of the techniques I wrote about, but the code generation side of my compiler had got hairy enough that I decided that it should suffice to show implementability rather than implementation. I think I did this, and I don't think my examiners doubted it either, although they did mince some words on the subject. In the end, they were reluctant to accept this implementability evidence as sufficient defence of the thesis. I couldn't put my finger on why, and I wouldn't say they could, either. Instead, I only got some quite vague questions, in essentially four forms.

    The first was: “How do you know your language features are sufficient?” Of course, I don't. Since I spent a whole chapter talking about cases that aren't handled, clearly I make no such claim (although I do identify what needs fixing and how this doesn't break the key abstractions of the language). I do claim that they're sufficient for the case studies, and that, since these are representative of other code, they will be sufficient for a wider range of code. This is demonstrated by argument and careful analysis of code rather than saying “we ran it”. But saying “we ran it” is still subject to error---since realistically, how much testing did you do, and how can you be sure it was enough? The case the examiners seemed to worry most about was the one where, by failing to account for some unspecified detail, some new language feature or altered semantics would be necessary just to handle the case studies themselves, never mind other examples to which I claimed it generalised. I think I provided quite a weight of evidence that this wasn't going to happen. However, even if it did, it would still be a matter of error bars, not validity.

    The second was: “How do you know you haven't forgotten something in your implementation?” Again, I don't, but I have implemented enough that the implementability isn't in doubt. Even if a fully working version would turn up one or two extra details that need addressing, this wouldn't undermine the thesis.

    A final question: “How do you know your language features are necessary?” I still find this question bizarre. The language features exist to handle common cases in a way that saves programmer effort. Every feature is illustrated with a plausibly common example, and any experienced programmer would recognise its usefulness. This doesn't mean they couldn't be eliminated, but doing so would bring a strict degradation in what the language offers the programmer.

    What didn't help was that the examiners didn't ask me these questions one at a time, but rather rotated among them with dizzying speed. It was as though they themselves hadn't yet separated them in their own heads. Had they asked them one at a time, I might have been able to fend them off better, along the above lines. As it was, I can't help feeling I did well not to get too put out by it all. I nearly did lose my cool at one point where one examiner suddenly claimed that I needed to do a performance evaluation. I had very explicitly and very clearly excluded performance from any but informal consideration very early in the dissertation, precisely in order to prevent my task from blowing up even further than it already had. Fortunately I managed to argue this one down, although annoyingly, I still have to gather some (meaningless, but fairly trivial to collect) performance data for my corrections.

    The “solution”

    So, how did the examiners propose that I answer their objections? In the time-honoured hoop-jumping way: to finish the implementation, of course, so that I can say “I ran it”! Actually I only have to get it up to a certain level, rather than finishing everything, which I suppose is something to be glad about. But I had failed to complete my implementation for very good reasons: it's a ton of work, and it was already past the point where its feasibility was established. In hindsight I could have written up this fact better. But I think it was still clear that what remains is a matter of development---which I wasn't prepared to spend any more of my own money to fund, given that I'd already spent 6 months living off savings and consultancy work. Fortunately, circumstances now mean that I have a job which pays enough that by going part-time I can get it done while remaining solvent. (It also had to happen this way round, since if I hadn't been able to submit my thesis without a full implementation, I wouldn't have been able to get the job that is now, indirectly, paying for the implementation's completion.) Of course, my financial situation is an irrelevance as far as examination goes, and it has to be that way. The moral is that there is no safety net, and nobody truly responsible for your thesis other than yourself. The system is accountable to nobody, and it has no incentive for self-improvement... except maybe to the extent that (and over the timescales by which) PhD examinees who suffer negative experiences become examiners who can still remember them. “It's not fair!” as Jennifer Connelly once declaimed, “... but that's the way it is”.

    The role of empirical rigour

    At the moment, and probably since time immemorial, there is a cohort of CS researchers in the fields of programming languages and software engineering who are vociferously advocating greater empirical rigour in research. Early on in my PhD, I thought that this movement could only be bad news for little old me. I barely had the resources to produce an implementation within one PhD, never mind do an empirically rigorous user study. However, now I think that this movement is actually on my side (as well as the side of “truth” and good science, which I didn't doubt). The hoop-jumping that would have satisfied my examiners---producing a working implementation and running it---doesn't actually strengthen my thesis, and in an empirically rigorous discipline, this would be clear. In turn, it would probably be more of a “done thing” to submit theses that don't tell a complete story---because telling a complete story about something as complex as a practical programming language, and doing so with empirical rigour, is too much work for one PhD. Perhaps it would be more acceptable to package research “towards” a goal (evidence, but not yet conclusive evidence), with its outstanding threats to validity clearly explained, yet unresolved. Instead, in our empirically immature discipline, we try to brush these unresolved threats aside by arbitrary hoop-jumping.

    The downside of a more empirically rigorous discipline would of course be that each researcher can't race ahead quite so fast. Within the scope of one PhD, there is far less prospect of telling a neat, complete story. In my case, this would have been both good and bad. For empirical rigour's sake, I would have to have spent much longer on case study, including (probably) starting my thesis with an empirical study. Perhaps all implementation would have to be left for the future, and my thesis's contribution would mostly be on understanding the problem empirically, with a paper sketch of the solution validated by detailed analysis of examples. Of course, this paper sketch would have a weight of evidence behind it. The downside is that I actually like the idea of implementing stuff, and even though I haven't (yet) finished the job (and I am now working on it, again), I would have found it frustrating to embark on a PhD with no intention of completing an implementation.

    Conclusion

    This post probably sounds like a lot of sour grapes, although I hope it doesn't. It's actually a positive thing for me that circumstances have conspired to give me a chance to finish the Cake implementation, since it will be a useful springboard for future work and perhaps even (gasp) impact. Previously, when I was resigned to not finishing it, it was looking like this would become an albatross. More generally though, I can't pretend not to be a little bit sour about the course my PhD took. Despite making what were defensible and reasonable moves at each stage, the whole thing turned into a bit of a mess and has caused me a lot of pain. However, the mess of the work (which could have been better, but I think was comfortably “good enough”) is a different mess from that of the examination. I am now very strongly convinced that there really is a problem with the attitudes to evidence, rigour and the mythical “completeness” in computer science. If I last long enough in this line of work, perhaps I can help do something about it.

    [/research] permanent link

    Tue, 22 Mar 2011

    How much memory could an mmap() map...

    ...if an mmap() could map memory? I've had to ask this question of Linux on 64-bit x86 platforms recently.

    For reasons I will only hint at, I want to allocate a huge region of virtual address space for a structure a bit like a linear page table (called a “virtualized page table” on Wikipedia). We rely on a particular behaviour of Linux's mmap(): that mapping some memory isn't the same as committing to the availability of any underlying physical memory. Passing the MAP_NORESERVE flag means that no swap space is reserved for the mapping, so it isn't counted against the kernel's overcommit accounting; physical pages are in any case only allocated when first written to, hence allowing us to create a sparse table easily.

    I decided my table should have one word per 4KB of memory. For a 32-bit machine, which has 4-byte words and 2^20 such (aligned) regions, this means I need 4MB of virtual address space for my table (i.e. about a thousandth of the VAS). If we ask mmap() for such a region, it will clearly oblige us. On a 64-bit machine, which has 8-byte words and 2^52 such regions, I need 2^55 bytes of virtual address space for my table---32 petabytes, or about eight billion times as much as in the 32-bit case, but again, only a small fraction of the total address space (in this case a 512th, because words are twice as big).

    Here's a quick program you can run to test whether you can do an mmap() of a given size.

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <assert.h>
    
    int main(int argc, char **argv)
    {
        assert(argc > 1);
        errno = 0; /* so the ERANGE check below is meaningful */
        size_t mapping_size = strtol(argv[1], NULL, 0);
        assert(errno != ERANGE);
        assert(mapping_size > 0);
        assert(sizeof(size_t) == 8); /* we expect a 64-bit platform */
            
        void *ret = mmap(NULL, mapping_size, PROT_READ|PROT_WRITE, 
            MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0);
    
        if (ret == MAP_FAILED)
        {
            fprintf(stderr, "error: %s\n", strerror(errno));
            return 1;
        }
        else 
        {
            fprintf(stderr, "success!\n");
            return 0;
        }
    }
    

    And here's a shell script to drive it with powers of two until it fails.

    #!/bin/bash
    
    for exp in `seq 10 50`; do
        ./test $(( 2 ** $exp )) || break;
        echo "Succeeded for 2 ^ $exp"
    done
    

    I'd be curious to know whether anyone on an x86-64 Linux machine maxes out anywhere different than 2^46 bytes. The kernel source will have the answer, but I can't be bothered wading through it right now. Interestingly, turning off the overcommit check (i.e. writing "1" to /proc/sys/vm/overcommit_memory) doesn't increase the limit for me.

    By the way, I'm using strtol because atol seemed to be narrowing the result to 32 bits even though a long is 64 bits. Instead of 2^31 I got -2^31, which unsurprisingly made mmap() fail. This seems like a bug, but probably isn't for some reason (possibly including a stupid mistake by me---one likely culprit would be a missing #include <stdlib.h>, which would leave atol implicitly declared as returning int).

    As you might have guessed, I'm using this huge region of memory as a big flat structure to record metadata about memory. The main trick of a linear page table is that we can use virtual memory to encode large sparse arrays, without allocating memory for page-sized regions of the table that are empty. This generalises to sparse tables other than page tables. The one I'm building is for tracking allocated heap regions.
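
    For the curious, here's roughly what indexing such a table looks like. This is a minimal sketch rather than my actual code: the names table, table_init, table_set and table_get are invented for illustration, and I assume the 48-bit usable address space mentioned in the update below.

    #include <stdint.h>
    #include <sys/mman.h>
    
    #define PAGE_SHIFT 12                         /* one entry per 4KB page */
    #define NENTRIES (1ULL << (48 - PAGE_SHIFT))  /* assume 48 usable address bits */
    
    static uintptr_t *table; /* one word of metadata per page */
    
    /* Reserve the whole table up front; untouched parts cost no physical memory. */
    static int table_init(void)
    {
        table = mmap(NULL, NENTRIES * sizeof (uintptr_t), PROT_READ|PROT_WRITE,
            MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0);
        return (table == MAP_FAILED) ? -1 : 0;
    }
    
    /* Store and retrieve the metadata word for the page containing addr. */
    static void table_set(const void *addr, uintptr_t meta)
    {
        table[(uintptr_t) addr >> PAGE_SHIFT] = meta;
    }
    
    static uintptr_t table_get(const void *addr)
    {
        return table[(uintptr_t) addr >> PAGE_SHIFT];
    }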

    [Update, 2011-3-23: thanks to Malcolm Scott who pointed out that my problem might be more tractable, because current x86-64 processors only implement a 48-bit address space. This also means that the 46-bit limit makes more sense---my mmap() successfully allocated a quarter of the usable virtual address space! Now I'm wondering: are those 48 bits something I can rely on for the nominal x86-64 architecture, or will running the same binaries on future hardware silently issue addresses from the larger 64-bit space? For now it doesn't really matter, but send answers if you have them (on a postcard, if you like) please.]

    [/devel] permanent link

    Thu, 17 Mar 2011

    Everything is a somehow-interpreted file

    My last post was ranting about the difficulty of maintaining and debugging configuration of complex software. I somewhat naively advocated the position of using the filesystem to the greatest degree possible: as the structuring medium as well as a container of data. This is good because it maximises the range of existing familiar tools that can be used to manage configuration. But at least in some cases, the filesystem---as an implementation, although sometimes as an interface too---is manifestly not good enough (e.g. in terms of space-efficiency for small files). Instead, we want to make our old tools capable of accessing, through the filesystem interface they already know and love, these diverse and wonderful implementations of structured data.

    I have heard many researchers claim that binary encodings of data are bad, that XML is good, or even that revision control systems are bad ways of storing code, for the same recurring semi-reason that good things are things that conventional tools work with---sometimes the phrase “human-readable” crops up instead---and bad things are things they don't work with. You can search XML or plain source code using grep, or so the reasoning goes; you can't generally do the same for binary or diffed-text or otherwise munged encodings. This argument is classic tail-wagging-dog material. Plain text is only “human-readable” because there is some widely deployed software that knows how to decode binary data representing characters (in ASCII or UTF-8 or some other popular encoding) into glyphs that can be displayed graphically on a screen or on a terminal. If other encodings of data do not have this property, it's foremost a problem with our software and not with the encoding.

    Unix is arguably to blame, as it is obsessed with byte-streams and the bizarre claim that byte streams are “universal”. The myriad encodings that a byte-stream might model are less often mentioned. I'd argue that a major mistake of Unix is a failure to build in any descriptive channel in its model of communication and storage. Without this, we're left with what I call the “zgrep problem”: each file encoding requires its own tool, and the system offers no help in matching files with tools. If I have a mixture of gzipped and uncompressed text files in a directory---like /usr/share/doc on any Debian system---recursively searching is a pain because for some files we need a different grep than for others. Some approach combining analogues of HTTP's Content-Type and Accept-Encoding with make's inference of multi-hop file transformations could allow the operating system to transparently construct the correct input filters for searching this mixture of files from a single toplevel instance of our plain old grep tool.
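
    As a straw-man illustration of what “matching files with tools” could look like, here's a toy sketch of my own (not part of any real system): it sniffs the first bytes of a file and names a tool that can search it. A real solution would infer whole chains of decoders, make-style, rather than hard-coding two cases.

    #include <stdio.h>
    
    /* Decide which tool can search a given file, by content rather than by name.
     * Toy version: recognises exactly one non-plain encoding (gzip). */
    static const char *search_tool_for(const char *path)
    {
        unsigned char magic[2] = { 0, 0 };
        FILE *f = fopen(path, "rb");
        if (f) { fread(magic, 1, sizeof magic, f); fclose(f); }
        /* gzip streams begin with the bytes 0x1f 0x8b */
        if (magic[0] == 0x1f && magic[1] == 0x8b) return "zgrep";
        return "grep";
    }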

    For this to work we need an actually sane way of describing data encodings. (Call them “file formats” if you will, although I prefer the more generic term.) Later attempts at describing the structure of byte-streams, notably file extensions or MIME types, have been ill-conceived and half-baked, mainly because they are based on a simplistic model where a file has “a type” and that's that. So we get the bizarre outcome that running file on /dev/sda1 says “block special” whereas taking an image of the same disk partition and running file on that will say something like “ext3 filesystem data”. Doing file -s might be the immediate workaround, but in reality, any given filesystem object has many different interpretations, any of which may be useful in different contexts. A zip file, say, is a bunch of bytes; it's also a directory structure; it might also be a software package, or a photo album, or an album of music.

    Another interesting observation is that these encodings layer on top of one another: describing a way of encoding an album of music as a directory structure needn't target a particular encoding of that structure, but presumes some encoding is used---any encoding that can model that directory structure is sufficient. So we want to capture them in a mixin-like form, and have some tool support for deriving different compositions of them. What would be really neat is if a tool like file, instead of doing relatively dumb inode-type and magic-number analyses, actually did a search for encodings that the file (or files, or directory) could satisfy. Each encoding is a compositional construction, so the search is through a potentially infinite domain---but one that can usually be pruned to something small and finite by the constraints provided by the file data. But by giving it more mixins to play with, as configuration data, it could find more valid interpretations of our data. This sort of discovery process would solve the zgrep problem in far more complex cases than the one I described. Implementing (and, er, somehow evaluating) these ideas might make a nice student project at some point.

    These ideas have been with me right from my earliest forays into research-like work. I've yet to really nail them though. My bachelor's dissertation was about a filesystem exposing arbitrary in-memory data. And my PhD work addressed some of the problems in making software accessible through interfaces other than the ones it exposes as written. I've yet to do anything with persistent data or other IPC channels so far, but it has been on the list for a long time. (It's become clear with hindsight that this agenda is a characteristically postmodern one: it's one of building tools and systems that provide room for multiple interpretations of artifacts, that don't insist on a grand and neatly-fitting system being constructed in one go, and that accommodate incomplete, open or partially described artifacts.)

    The power-management problem that provoked my previous post actually gets worse, because even when not on AC power, closing the lid usually, but not always, makes the laptop suspend. Sometimes it just stays on, with the lid closed, until the battery runs down. For my next trick, I may be forced to come up with some bug-finding approaches that apply to scripts, filesystems and the like. If we're allowed to assert that a file has a particular abstract structure, and check that separately, then we can probably factor out large chunks of state space from real software. In turn, that might shift our focus away from “inputs of death”, fuzz-testing and other crash-focussed bug-finding techniques, and towards the harder but more interesting ground of finding functionality bugs. I'll rant about that in a forthcoming post.

    [Update, 2011-3-18: I just ran into an interesting example of this same debate, where Ted Ts'o is actively advocating using a database in preference to maintaining lots of small files, for performance and robustness reasons, in relation to ext4's semantic differences from ext3. Understandably, he provokes a lot of discussion, notably here, where people complain that binary files are difficult to maintain. There is a mix of views and a wide variance of clue level, but the real issue is that no one interface---where “interface” includes robustness and performance characteristics---is the globally “right” one.]

    [/research] permanent link

    Wed, 16 Mar 2011

    Config filesystems, not config files

    Like most computer scientists, I really hate tinkering with computers. Actually, that's not true. Like many computer scientists, I like tinkering with computers in a constructive way that achieves something interesting and novel (read: practical CS research). But I hate tinkering that is provoked by stuff not working. I use a lot of software that has episodes of the latter kind---most of it is free software (and, erm, arguably most free software is of that kind, but that's another story).

    One recurring pain is that I learn how to configure software the way I like it, but then all that hard-learnt knowledge is eroded by change: change in what software is “current” or “supported”, and also change in the way any given bit of software manages its configuration. If you like, software's semantics often change, particularly at the configuration level.

    So often, I'm faced with a bunch of hassle just to keep my configuration working the way I like it. Recent examples have included: KDE 4 clobbering my X resources when it starts up (in a way that KDE 3 didn't); Xorg forcing me now to use xrandr to set up multiple monitors; wireless networks now being preferentially handled using Network Manager not ifupdown.

    In dealing with this complexity, one recurring principle has been clear to me. The closer a configuration system stays to the Unix filesystem abstraction, and the less abstraction it tries to re-build on top of the filesystem, the easier it is to deal with. This is because using a direct filesystem encoding, I can use standard tools, from a shell, to inspect and search and modify (and even generate) my configuration, and to debug problems. This is also why gconf sucks, just as the Windows registry sucks: they represent hierarchical data using a custom encoding layered on flat files, rather than embracing the hierarchy already given to them by the filesystem. (This approach is even less excusable on Unix than on Windows, because Unix filesystems are somewhat optimised for small files in a way that Windows-land filesystems traditionally weren't, as exemplified by FAT.)

    In some quarters there's been a drive to actively embrace the filesystem as a means of capturing the structure of configuration data. Configuration directories (like xorg.conf.d) are one example, although it has now gone away; symlink structures like the traditional System V init runlevel directories are another; the run-parts idea of directories encoding control structures is a third. Configuration is easy to understand and modify when it's laid out transparently in the filesystem. When it's instead recorded as opaque data in random files somewhere, this is not true.

    Unfortunately this drive towards transparency is not universal. Today I've been debugging a configuration issue with power management on the recent Ubuntu. When I close my laptop lid with no AC power, it suspends to RAM. When I close the lid on AC power, it doesn't---but I want it to. I had assumed the matter was under the control of the scripts in /etc/acpi/, but a quick inspection of the lid.sh script revealed that it didn't deal with suspending to RAM at all. It turns out that KDE 4 has something called “PowerDevil” and that this is responsible. I can configure it using KDE's graphical systemsettings application. But this whole set-up is unsatisfactory. How does it interact with other system settings, such as the /etc/acpi/ scripts? Why is a KDE-specific tool replicating the functionality that is already provided at a lower level? And now I have one more chunk of configuration to deal with, at one more path in the filesystem, and one more model of the settings domain to understand---squirreled away inside its own configuration file (mercifully in plain-text format).

    Now, the researcher will say that there's a problem here: why should a simple need, such as the gconf-like desire to implement a familiar abstraction (or something close to it) with different performance characteristics, bring such a huge cost in terms of tool support, convenience and integration? It's not really an answer, as I have proposed, to “just use the filesystem, stupid”. For the same reason, even filesystem-embracing approaches don't go so far as to have one file per setting, say---there is some amount of filesystem-opaque flat structure. I'll save some comments on this for a (near)-future post.

    [/research] permanent link

    Mon, 14 Mar 2011

    Writing a Part II dissertation

    Not that long after I did Part II of the Tripos in Cambridge, I wrote a piece about how to do a successful Part II project, based on bitter experience of having done a less-than-completely successful one. Now it's a few years later, and I've supervised four Part II dissertations and proof-read a couple more (and now left Cambridge!). I thought I'd write down some hints about what makes a good dissertation. As before, the pink book is well-meaning but misses a lot of this.

    I should also add that this advice is not prescriptive---it's a mixture of my personal preferences as a reader of dissertations, and a conservative underapproximation of “known ways to write a good dissertation” according to my experience. Not being part of the examination process myself, I can't say it's the only way, nor that it's sufficient, nor even that it's always correct---only that these recommendations seem to have some correlation with good marks.

    Introduction

    This chapter is for you to identify the problem space of the work, and what specifically your work seeks to achieve in that space. You should outline the technical background of the work here, but only at a high level---save the details for the next chapter. The emphasis should be on context: motivating use-cases, and covering a selection of differing alternative approaches to the problem. Once you've covered that, you can give a specific taster of the one you adopted---probably in the form of a simplified summary, perhaps a preview-style example or two (of the work you did and its supported use-cases) and an outline of the rest of the dissertation.

    Preparation

    This chapter should consist mostly of technical background, at a level deeper than the introduction. Think of it as mirroring the preparatory reading (and perhaps experimentation) you had to do before beginning the project work proper. You should describe the specification of the system you built, outline its overall design, and mention any changes to your proposal. It's particularly worth emphasising anything hard that you had to learn in order to undertake the project---such as unfamiliar tools, languages, libraries, programming styles, algorithms, bits of mathematics, and so on.

    For pitching your work, my rule of thumb is to assume your reader knows basic computer science (say, an outline understanding of the first two years' Tripos content) but is a bit slow to catch on. In other words, try to make the reasoning in your write-up extremely clear and easy to follow. This is not because the examiners are slow to catch on. Rather, it's because they are probably in a hurry, and because you're almost certain to underestimate the difficulty of understanding your own write-up.

    Avoid assuming that the reader knows any particular language, tool or detailed technique that you use. For example, if your work is based on implementing some particular programming language, you can assume the reader has basic degree-level knowledge about programming languages in general, but you should use this chapter to give an outline introduction to this specific language and its peculiarities, with references to additional material.

    This chapter is somewhat infamous for requiring that you write some “boring” stuff about development practices: what tools you used, your testing strategy, how you took backups, what development practices you followed, and so on. I say “infamous” because we researchers, when we are proof-reading dissertations, find this all a bit incongruous---most of a dissertation reads a bit like a research write-up, whereas this stuff doesn't. Since we're not accustomed to seeing this kind of thing written down, it thuds a bit. However, it's the researcher's perspective that's at fault here. Most dissertation authors are about to go off into industry, and from the perspective of professional software development, including this small amount of practice- and process-oriented content is a very reasonable requirement.

    Implementation

    To a first approximation, this chapter should be describing the clever data structures and algorithms that appeared in your project. Think “meaty”---a really good Implementation chapter will come across as chock-full of interesting, nontrivial technical material, clearly described and comprehensible. It should also have reasoned discussion of how your techniques work and why they are well-suited to the problem.

    When explaining how your system works, there is a question of what level of explanation is appropriate, and how much detail you should go into. I'd advise that you completely avoid descending to syntactic code-level details (e.g. names of particular classes and methods). There could be an exception if some class or method was so conceptually significant and pervasive that mentioning it by name is the clearest presentation, but this is relatively rare. Think of this chapter as providing descriptions sufficient for a reader to go away and reimplement your system algorithm-for-algorithm, but not identifier-for-identifier. Including a UML class diagram, or similar, is okay---particularly when it's showing something interesting!---but I'd advise against going any deeper than that.

    Instead of code-level detail, explain your implementation using pseudocode, data-structure diagrams, flow-charts, time-sequence diagrams, and the like. A good project will have meat here that is outside the syllabus, or at least drawn from the more specialised end of Part II courses, so you should make these especially clear and assume your reader is unfamiliar.

    You may well have used some third-party software as part of your system. You should explain how it fits in. Moreover, it's okay (and generally good) to include a summary of how it works, even though you didn't write it. This is to show that you could have implemented it yourself if you'd had to, but that there was no good reason to!

    I would personally avoid talking about evaluation harnesses, methods or support coding here. That is best saved for the Evaluation chapter.

    Look at existing technical writing, perhaps research papers in reading lists of your lecture courses, for more ideas about how to present technical content clearly and at the right level.

    Evaluation

    Couch your evaluation in terms of claims that your work supports, or goals that you claim your work has fulfilled. The chapter then becomes an exercise in substantiating these claims.

    Explain your method of evaluation. This requires answering at least three questions. What data are you collecting? How will you be interpreting it? Finally, how are your claims decided by this interpretation of your data? That final question is the trickiest to get right---usually it involves an argument that the interpretation of the data soundly represents the abstract truth of the claim in some way. (However, you don't need to belabour the point on that last issue---as long as your approach is not completely off-the-wall, it's generally enough to show that you've thought about the validity of your evaluation method and that it's not obviously fatally flawed.)

    There should be results, or data. Since these will hopefully be quantitative, they are usually presented as tables and/or graphs. Pay attention to precise presentation of results: units, appropriate precision, error bars, discrete vs continuous graph conventions, etc. For each piece of evidence you present, say why you're interested in it (first!) and (then) what it shows and your confidence in that.

    Present results so that they can be reproduced. So if you give program execution times, say what the machine you were running on was. If you can abstract this to something of more robust meaning than numbers relevant only to your particular experimental rig, then do so, although unless the mapping is absolutely trivial, include the raw data too. One example: if you measure execution times of some low-level code in CPU microseconds, say, then it might be worth abstracting that to CPU cycles if you have the relevant data---but keep the original measurement in there too. More importantly, explain what you are doing! One of the easiest ways to write a bad Evaluation chapter about good work is not to explain the origin or significance of the numbers or graphs you're presenting.

    Have something to compare your system against. If this really doesn't seem feasible, you can sometimes get around this by defining your own goals up-front (e.g. “at least 70% precision...”---including some argued justification for the number!), then showing that you satisfy them. In this case, the goal should be mentioned in the Introduction or Preparation sections. It is nevertheless nicer to show that your system does better than some other system or approach, even if it's a somewhat naive or trivial one. For example, comparing a query-routing algorithm against a random algorithm is better than no comparison at all.

    Conclusions

    Say what you did, in summary. It's okay to repeat yourself from earlier sections. Also, take the opportunity to talk about future work: what you learned, what could be done better, how your system could be extended to do new and wonderful things, or what better ideas you've had since about how to tackle your chosen problem and its close relatives.

    Bibliography

    The bibliography should consist of references to archival material. There are some things you'll want to “reference” that are not archival material. For example, you will want to reference software packages you've used. I personally avoid trying to “cite” these in the bibliography, although some authors do this. So that point, and much of the following, is mostly personal advice.

    Since URLs are not (under current web technology) stable enough for archival use, avoid URLs except as a last resort, when they're the only way of identifying a published document. Unlike paper publications, web pages are living documents, so if you use a URL, include the date on which you looked at the content (e.g. “retrieved on 2010-07-30”) so that a reader can (hopefully) use an archiving search engine to see the precise page that you saw.

    I prefer to use footnotes for referencing software by URL. (But generally use footnotes sparingly---only for places where they'd really interrupt the main flow of text, but are nevertheless relevant and interesting.)

    This is very personal advice, but I'd say: in the main text, avoid using citation markers as nouns. A lot of writers do this, but I find it jarringly bad style. Instead, use the author's name. For example, don't write “as shown by [42]”, but instead write “as shown by Smith et al [42]”. If you're using author--year citation markers (which I generally prefer to plain numbers), there are usually different citation styles you can use to avoid repeating the author's name, for example “as shown by Smith et al [1990]” is preferable to “as shown by Smith et al [Smith 1990]”.

    Style

    Following are a few other pieces of advice on style that I've had cause to dispense in the past.

    Don't feel the need to tell a complete, linear story. It's very tempting to write what you did as a narrative. However, this usually has two problems. Firstly, by emphasising the what, it's easy to understate the why, which can make the reader baffled by a succession of sentences saying “I did this, then I did this, then I did this...” but without much insight about the overall goal or the motivation behind each step. Secondly, some of the twists and turns that you took are probably not interesting at all---no matter how much of a struggle they were for you at the time---so can be left out. So, instead of a narrative, try to write instead in what I call a “progressive encoding” or “iterative deepening” style. Think about the most significant high-level points you want to make. As a rule of thumb, each of these should be stated concisely at the beginning of its own paragraph. The rest of the paragraph can fill in additional detail and back-story in successive sentences. This results in a structure where the fact and the explanation occur close together. Curiously, they usually turn up in reverse order (fact first, then explanation) and this is still very readable. It's a very common style in good scientific writing. It has the rhetorical benefit that you keep the reader interested. Making a contentious or surprising statement, then delving progressively deeper into a justification of it, is a trick that can keep the reader engaged when used judiciously.

    As an example, instead of saying “I tried running the JVM with the default options, but this suffered from out-of-memory errors, so I ran with an increased heap size of 384MB”, say “the JVM was configured with the larger heap size of 384MB. The smaller default heap size was discovered to be insufficient when constructing the intermediate representation for larger input files.” Again, you can hoist more significant facts sooner, and leave more peripheral matters until later in the paragraph. Note that there will still be a temporal progression to some extent, particularly between paragraphs. Note also that by this approach you will naturally end up leaving out the redundant twists and turns that are not interesting. If you think something interesting has been left out unfairly by this structuring, you can always put it in a paragraph towards the end of a section or chapter, clearly signposted as a curiosity rather than part of the main flow. (Sometimes, a paragraph will end with a bracketed note like this one, to further outline the distinction between more- and less-important content.)

    Avoid editorialising, by which I mean expressing opinions (e.g. about particular designs, technologies, approaches, companies, products etc.) that you don't intend to scientifically validate.

    Underneath any heading in the dissertation, it's good style to include at least a summary sentence before descending a level of heading structure (e.g. between heading and subheading).

    Finally, keep your writing grammatical. Every sentence should contain a verb!

    That's all I have to offer---please do let me know any suggestions, queries, corrections, counterexamples or contrary experience you may have about anything in this post.

    [Update, 2011-10-13: Andrew Moore supplies this very helpful set of notes from his duties as examiner in 2011!]

    [/teaching] permanent link

    Wed, 09 Mar 2011

    Program specialization (is not just partial evaluation)

    I've been thinking a lot about various techniques in program analysis, transformation and verification recently. There's certainly a lot to think about.

    One idea I'm exploring is looking at verification problems as program specialization exercises. There is a recurring two-stage process in verification. First, transform your program so that a single execution captures all possible inputs. For an explicit-state model checker like CMC, we do this by putting our program in a harness that systematically explores its state space. Alternatively, for approaches based on predicate abstraction, we replace all input-dependent transitions in the program with nondeterministic choice. The effect is the same: we now have one program encoding all possible behaviours. The second step is then to specialize our program for answering the question we care about, such as “does this assertion ever fail?”. We rely on this specialization to give us a new, simpler, faster program that we can exhaustively check, or can check to greater depth, without exhausting resources (time or memory).
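
    To make that first step concrete, here's a minimal sketch of the second flavour of transformation. The nondet_int() stub is a placeholder in the style such tools conventionally use, not any particular tool's API; it is deliberately left undefined, and the checker supplies its meaning.

    #include <assert.h>
    
    /* A verifier treats this as “could return any int”. */
    extern int nondet_int(void);
    
    /* The original program read its input (say, with scanf); the transformed
     * version replaces that input-dependent transition with nondeterministic
     * choice, so a single “run” stands for every possible input. */
    int transformed_main(void)
    {
        int x = nondet_int();   /* was: scanf("%d", &x); */
        int y = (x > 0) ? x : -x;
        assert(y >= 0);         /* the question we care about; a checker would
                                   flag the x == INT_MIN case here */
        return 0;
    }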

    It's the specialization step I'm thinking about right now. How much of our program's computation can we throw away, while still computing the answer to our question? CEGAR approaches work from the bottom up: we start from a trivial abstraction and refine it to compute something close to the smallest program which finds either an absence of bugs or at least one non-spurious bug. This process need not terminate; I'm not yet clear on its other failure modes, but am fairly sure there are some. Meanwhile, a top-down approach also exists. CMC is a useful tool even though it doesn't do any specialization of the computation per se. (It does support some other kinds of abstraction for reducing the state space by defining equivalences, which have a similar effect but are of limited applicability.) To improve on this, we could exploit the fact that throwing away unwanted computation is something we know something about. Compilers have been doing this since compilers began. “Program specialization” is a term used mainly by compiler-minded people rather than verification people. Can we apply ideas from one world to the other?

    “Program specialization” in the literature is often used to mean partial evaluation. With partial evaluation, we take a program of n inputs, say, and then produce a smaller, simpler, faster version where some of these inputs are replaced by fixed values. This is typical of optimisation problems, where “faster” is the key requirement, and the input constraints have usually been derived from some other analysis. However, there is a converse case of program specialization which the same literature often ignores. This is where we take a program of n outputs, and then produce a smaller, simpler, faster version where we “don't care” about some of these outputs. This is typical of verification problems, where “simpler” is the key requirement, and the selection of don't-care outputs is a consequence of the specification being considered.
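
    A toy example of the contrast, with invented function names: partial evaluation fixes an input, while the converse specialization discards whatever feeds an output we don't care about.

    /* The general program: two inputs, two outputs (via out-parameters). */
    void compute(int a, int b, int *sum, int *product)
    {
        *sum = a + b;
        *product = a * b;
    }
    
    /* Partial evaluation: specialize on a known input, b == 2. */
    void compute_b_is_2(int a, int *sum, int *product)
    {
        *sum = a + 2;
        *product = a * 2; /* which a compiler might strength-reduce further */
    }
    
    /* The converse: specialize for a don't-care output. If the property being
     * checked never mentions *product, its computation can be thrown away. */
    void compute_sum_only(int a, int b, int *sum)
    {
        *sum = a + b;
    }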

    Predicate abstraction is doing this, but with some added room for manoeuvre---since it's open to finding sound approximations rather than precise specializations---and also with some added constraints, since it's interested in predicates that can be input to a SAT or SMT solver to perform the abstraction-refinement. Dave provided a valuable link in a productive coffee break this morning, by noting that program slicing is also an instance of specializing for don't-care outputs. What happens if we use slicing techniques to do a top-down specialization? I'm worried the answer is “not enough” or “strictly worse than abstraction-refinement”, but I'll keep thinking about it.

    [/research] permanent link

    Greek talk

    One of the reasons why I'm not a theoretical computer scientist is that I am very, very averse to mathematical notation. “It's like Greek to me!”---no applause, please. Certainly, it's common to see very highly abbreviated notation that takes some serious cognitive gear-turning to decode. If I'm faced with a Greek-heavy paper, I usually skim over the symbolic stuff and look for an explanation in words. Sometimes it's there, and sometimes it isn't. In the cases where it's not, I rarely have the stamina to wade through the Greek.

    Natural language, for all its imprecision, is---unsurprisingly---more “natural”! In fact, I'll wager that most of the infamous imprecision found in natural language specifications could be fixed by more precise natural language. Perhaps a semantic checker for English is in order. Diagrams are even better than natural language, of course, although they rarely stand alone.

    It strikes me that formalism is primarily useful for avoiding mistakes. By turning complex reasoning into simple pattern-recognition and symbol-pushing, correctness can be checked fairly dumbly. The cost is that although it's hard to make mistakes, it's hard to make progress: there are reams of applicable rules, and expressing anything complex requires a whole lot of symbols. So I'm going to go out on a limb and claim that formalism is notably not very good for acquiring understanding. In a lecture, diagrams and examples and words have always been far more useful to me than slides full of Greek. I'm also going to assert (without proof!) that formalism is not useful for artifact construction, except where mistake-avoidance is paramount. We should allow programmers to make imprecise statements, and refine them later, because humans can be a lot more productive this way. In particular, we can make progress before we fully understand the problem! Only when the cost of the smallest mistake is so great that we really want to rein things in should we resort to fully rigorous constructive methods (such as formal refinement processes, the B method, etc.). This argument also encompasses many of the usual arguments in favour of dynamic languages over statically typed ones.

    Of course, that doesn't mean that any formal notation is to be avoided. For whatever quirk of evolution, humans have some aptitude for written language---and that includes more mathematical-style symbolic notations just as well as plain old words made of letters. So mathematical notation is fine if it stays within a particular comfort zone. I can read basic logic and basic algebra without much cognitive burden. Only when the formal notation passes a certain density threshold do I suddenly hit problems. I suspect that most theoretical computer scientists (and mathematicians) have a much higher threshold than I do.

    [/research] permanent link

    Thu, 03 Mar 2011

    The end-to-end razor

    Most of us have heard of Occam's razor. Usually it is invoked as the principle that given two plausible theories, explanations or solutions to a problem, we should prefer to believe the simpler one.

    I've always been a fan of “Einstein's razor”, which paraphrases a longer quotation of Einstein by the snappy dictum “Everything should be made as simple as possible, but no simpler”. The appeal is in its counterbalancing: there is value in simplicity, but there is harm in oversimplification.

    A third razor-like object occurs more often in system design. Most practical CS researchers will have read the “End-to-end arguments” paper. Usually, the end-to-end arguments are dumbly invoked to criticise any design which pushes a feature into the lower layers of a complex system (notably the Internet) when it could be implemented higher up. This interpretation is unfortunate. For one, it overlooks at least two subtleties expounded in the original paper: that a key criterion is whether the feature can be implemented completely and correctly at the lower layer, and also whether doing so brings any compulsory overheads (detrimental to applications not requiring the feature). But more importantly, it omits a vital counterbalancing concern: by implementing features higher up, we nearly always end up with not one but many variants of the same feature. Agreeing on which one to use is a hopeless problem of distributed (human) consensus, so we end up with a huge mess of interoperability problems brought on by this unnecessary diversity. So in fact, there are very real incentives for implementing functionality at the lowest sensible layer. The traditional end-to-end arguments don't bring these incentives out.

    In fact we should have been paying more attention to Occam all along, because his original statement that entia non sunt multiplicanda praeter necessitatem---“entities must not be multiplied beyond necessity”---is extremely suggestive of the cost of unnecessary diversity. Combining this razor and Einstein's, I prefer a different formulation of the end-to-end arguments, which I hereby name the “end-to-end razor” (with apologies to anyone who's used that name previously to mean something else). “Everything should be implemented at the lowest sensible layer, but no lower.” You can argue about what's “sensible”, but the criteria are the same as in the original end-to-end arguments. The difference is that the counterbalancing of considerations is explicit: there may be both value and harm in building at lower levels.

    Personally, as a programming researcher, I relish the challenge of working at lower levels. Solving a general problem by building a system which is tied to one programming language, for example, seems unsatisfying to me. Not only did the decision to make Cake target object code mean that it provides a somewhat language-independent solution to its problem, but, for me at least, it was just a lot more fun hacking around the OS, linker and C library than it would have been tinkering with a JVM or munging source code. I'm not entirely sure why....

    [/research] permanent link

    Mon, 28 Feb 2011

    Why I am not (yet) a functional programming enthusiast -- part 1

    I suffer from a particular disposition which, for a programming languages researcher, is quite an unfortunate one. When I hear my fellow researchers expounding the virtues of functional programming, I start to feel grumbly. Functional programming is really neat in a lot of ways. But there are some in which I find it unpalatable. Here is my first selection of complaints. They are mostly to do with the generally poor comprehensibility of functional code. I have more complaints in reserve, which will follow in due course when I'm feeling sufficiently grumpy.

    I concede that these are all elements of style, not language features per se. It's possible to write clean functional code which doesn't suffer from any of the problems I've mentioned. This hints at the fact that part of my problem is the culture among functional programmers, rather than the technology itself. That's still a showstopper though, because in practice we are reliant on other programmers. Without other programmers' having written tools and libraries that relieve us from writing everything from scratch, and documentation to explain them to us, no programming language is of practical use. Accordingly, I'll be back later with a more concrete example where this went wrong for me in an earlier foray into functional programming.

    [/research] permanent link

    Tue, 22 Feb 2011

    Explaining covariance and contravariance by data flow

    A few years ago when I was pressed by my students to give them a good explanation of Java's wildcard types, like Stack<? super Circle> or List<? extends Ellipse>, I came up with a simple code example concerning the direction of data flow. The idea seems both to be more general than I first thought, and to apply to more wide-ranging problems with subtyping relations than the case of Java wildcards. Since I've never seen it written down anywhere else, here it is.

    In a nutshell, the idea is that direction of data flow is key. (If that was obvious to you already, you can probably stop reading now!) In object-oriented programming, all interactions across object interfaces can be seen as sending messages. In a procedural language like Java, each interaction will in general involve two messages---the call and the return value. In a parametrically polymorphic type system like modern Java's, one of the key roles of the type system is to check (conservatively, as always) that any invocation will not try to send the wrong kind of data, in either direction.

    Suppose I have some code that works by pulling objects out of a queue. To work correctly, these objects have to be of class Circle or some subclass thereof. So, we need a queue whose designated element type is Circle or some subtype of Circle, i.e. ? extends Circle. We say that the type of the queue here is covariant in its element type, meaning the subtyping relationships go “the same way”.

    Meanwhile, suppose the same code, after it processes the Circle, wants to push the object onto another queue for some other code to deal with. We want to ensure that this queue can accept the objects we give it, which might be Circles or instances of any subclass. So we need a guarantee that it can take any such objects. So, a queue of UnitCircle, say, would not be good enough---if we gave it a plain Circle this would be a type error. So our output queue has to have element type Circle or any supertype thereof, that is, element type ? super Circle. Here the type of the queue is contravariant in the element type, meaning the subtyping relationships go opposite ways.

    I won't claim that this explanation covers all sane use-cases of wildcard types. On the other hand, if you know of any that aren't in some way similar, I'd like to hear about them. The idea is certainly more general than just code which processes items in queues, or just code that pushes data from one data structure to another.

    This is also a neat way of considering a classic problem in typed object-oriented programming, called the “Circle--Ellipse problem”. The question is simple. We wish to define two mutable data types: Circle for representing circles and Ellipse for representing ellipses. Which way, if any, should the subtyping relation be between Circle and Ellipse?

    The trick is to remember that a mutable object is not just a value: it is a container for a value that can be retrieved and replaced. In other words, we can put data in and get data out. The presence of these two distinct directions is the source of the problem. Put simply: when used in the “out” direction, for reading the parameters of the circle, Circle is a subtype of Ellipse (every valuation of a circle also represents an ellipse); but in the “in” direction, for updating the parameters stored in the object, Ellipse is a subtype of Circle, since we can give an ellipse any set of parameters that we might give a circle, whereas the converse is not true---circles cannot be updated with the parameters of ellipses having unequal minor and major radii.

    An even simpler case which illustrates the in/out distinction is the usual subtyping rule for functions, which I will illustrate formal-style just because I can:

           s1 <= t1        t2 <= s2
        ------------------------------
              t1->t2  <=  s1->s2 

    In English: a function expression's type t1->t2 is a subtype of another function expression's type s1->s2 if t2 (denoting the set of result values) is a subtype of its counterpart s2, and the type t1 (denoting the set of argument values) is a supertype of its counterpart s1. Intuitively (or Liskovly), this holds because to be a subtype, any term typed t1->t2 must be substitutable into any context where a term of type s1->s2 could be used, meaning the full range of argument values denoted by s1 must be acceptable to the function (which may, optionally, accept other argument values too), and the full range of return values it produces, denoted by t2, must be acceptable to the call context (which may, optionally, be prepared to accept other return values too). I said “if” but turning the rule upside down is also valid, so “iff” would have been correct (but a stronger statement than the inference rule itself).

    I'd like to add that this is the first time I've written down an inference rule in any of my writing.

    None of this says anything about how Circle and Ellipse should be implemented. Abstractly, they share a lot, so in any sane implementation we would want to share whatever code we could. In many object-oriented programming languages, inheritance is provided as a code-sharing mechanism. Which way should the inheritance relationship go? It doesn't matter, as long as we treat this as separate from the subtyping relationship.

    Here is the real problem. Some languages, particularly Java, make this difficult for us by assuming that these two relationships should be the same. Most people who know a little about object-oriented programming will know that inheritance and subtyping are distinct concepts, although they are often conflated. Historically, C++ can probably be blamed for this conflation, in that it popularised the overloading of a single mechanism (class derivation) for both. However, in C++'s defence, class inheritance is private by default precisely because of this distinction. Sadly, most texts which teach the language use public inheritance nearly everywhere without ever explaining why this can be a bad idea. Java made public inheritance the default, so is far more culpable in my opinion.

    Many people take this as an opportunity to lambast implementation inheritance. I won't do this. Inheritance is a useful language feature for capturing commonality of implementation. It's great that I can write a class C which shares a lot of implementation with a pre-existing class B, simply by describing what is different. It's very handy that there is no need to write forwarding functions to the code in B that can be used directly in C. It makes for readable code and discourages copy-and-paste code cloning.

    Subtyping is more subtle, because it is a semantic property. It relates to the meaning of code, not the raw code itself. Things like direction of data flow don't turn up in the more syntactic matter of code re-use (inheritance's territory) but, as we saw, very much do in the semantic world of types.

    I said I've never seen this explanation written down by anyone else, but actually, a bit of Googling reveals another blogger has covered similar ground. You can decide whose explanation you like best!

    [/teaching] permanent link

    Sun, 14 Nov 2010

    Completeness

    As researchers, we are naturally wary of addressing problems which don't have a well-defined end point. We like to be able to say, “we've solved it”, or perhaps “we've solved it for the following formally-characterized subclass of problem”. In my work, that doesn't really make sense. Cake is a practical tool for tackling interface diversity. No such tool can be complete, because for any nontrivial language, or interface encoding scheme, there is no limit to the crazy encodings that could be dreamed up. When Kent enumerated “the many forms of a single fact”, he wasn't making a complete list, which would clearly be impossible---but he was trying to come up with a catalogue that included a reasonably high proportion of what people actually do in practice.

    Although I could try to formally characterise the class of mismatch which my current Cake language is capable of reconciling, I've never seen much worth in doing so. It's hard, if not impossible, to come up with a formal characterisation that actually has practical meaning. In any such exercise, what tends to happen is that a bunch of practically meaningful cases are excluded, and a bunch of practically meaningless cases are included. One reason for this is that human brains are messy, and don't respect the formal boundaries induced by conventional mathematical thinking. Mathematical formalisms strive to tackle complexity primarily by purposely engineered compositionality; human evolution has done so by a random walk. The point is that humans are full of specialisations, resource limitations and arbitrary cut-offs. Too often, we embrace the formal concerns and ignore the human reality. This is the formalism wagging the research, and it's something that irritates me.

    One of my favourite examples of formalism-versus-reality is our use of nesting in sentences, and in particular, what linguists call “centre-embedded” or “self-embedding” structures. (The concepts are distinct, but the distinction is somewhat confused in the literature.) Sentences such as “The rat the cat the dog chased ate died.” are perfectly grammatical according to any formal grammar of English you might reasonably come up with---that is, any grammar that could also generate the sentences we actually use in real English. If you buy that the role of grammars is to model the languages that people actually use, then this is clearly a failure of modelling---yet is a consequence of the recursively generative nature of grammars loved by formalists. In practice, we humans don't work in this neatly recursive way---the complex, messy architecture of our brains means that only some kinds of embedding are easily processed, and it turns out, as expounded by Richard Hudson, that the actual criteria are far more complex than anyone would have thought. For example, it appears to make a difference whether the subjects of the clauses being nested are pronouns or not.

    A related and much more familiar example for most readers will be the halting problem. We know that no tool which attempts to answer an undecidable question can be complete. Therefore, a lot of researchers just avoid those problems. Byron Cook, who in recent years has been doing stellar and pioneering work on proving program termination, has been known to talk about the variety of bafflement, disdain and ridicule which his work provoked in its inception. It's simply anathema to conventional CS research approaches to attack problems for which it's known that there can be no complete solution. In fact, far too many self-professing computer scientists don't even understand the distinction between a partial and a total solution. We have to get over this, simply because lots of important problems are in this class! I like to think my work on Cake is doing its bit. The halting problem is somewhat different from Cake-like problems, in that it's a formally defined, provably-undecidable problem, whereas the incompleteness of systems like Cake lies in that we can't even completely define the problem space in any formal way. I should add that I'm not putting my work in the same bracket as Byron's! But there is at least this small thematic similarity in the flavour of criticism that tends to come up. Speaking of which, I could do with remembering this argument when it's time for my viva....

    [/research] permanent link

    Wed, 02 Jun 2010

    Making a SPLASH

    So my paper about Cake was accepted for OOPSLA at SPLASH---hooray! You can find a preprint in my publications section. Overall the reviews were positive, modulo a few legitimate grumbles about related work and evaluation. I still have implementation work to do, but I'm hoping to make a big software release at the end of the summer---as well as, hopefully, submitting my dissertation. The acceptance is a nice vindication of my work (and, not being completely unvindictive, I have to say it's a welcome rebuttal to the naysayers!).

    This is the first full-length research paper I've published, and has taken me a despairingly long time, but I finally feel as though I'm getting the hang of it all. I'm also finding that the infrastructure I've built can be applied to many different problems, and I have a giant and growing list of ideas to pursue in future work. The one thing that lets me down is my ability to implement my ideas to any reasonable schedule! Perhaps the knack is being very careful (not to mention experienced) about choosing which ideas to pursue, and how. Oh, another thing that lets me down is not having a job lined up for after I finish this pesky PhD....

    [/research] permanent link

    Wed, 21 Apr 2010

    Multi-core madness

    A while back I posted a rant to our netos mailing list, which I think says enough about my attitude to multi-core programming research that I should blog it here.

    My rant was prompted by the following call for papers for a special issue of IEEE software.

    “Software for the Multiprocessor Desktop: Applications, Environments, Platforms”

    Guest Editors:

    • Victor Pankratius (Karlsruhe Institute of Technology)
    • Wolfram Schulte (Microsoft Research)
    • Kurt Keutzer (Univ. California Berkeley)

    Multicore processors, like Nehalem or Opteron, and manycore processors, like Larrabee or GeForce, are becoming a de facto standard for every new desktop PC. Exploiting the full hardware potential of these processors will require parallel programming. Thus, many developers will need to parallelize desktop applications, ranging from browsers and business applications to media processors and domain-specific applications. This is likely to result in the largest rewrite of software in the history of the desktop. To be successful, systematic engineering principles must be applied to parallelize these applications and environments.

    [continues...]

    My rant: what desktop applications actually are there which need to take advantage of this wonderful new hardware? The CfP eagerly suggests rewriting a shedload of existing software, but that seems like a giant waste of effort -- at least in the common case where the existing software runs perfectly well enough on not-so-many-core hardware. This is true of pretty much all existing desktop software as far as I can see.

    There might be new application classes out there, or new compute-intensive features that'd benefit existing applications, but that wouldn't be rewriting, and in any case the CfP doesn't identify any....

    [/research] permanent link

    Mon, 19 Apr 2010

    Separating computation from storage

    One of the nice things about laziness is that it eliminates the distinction between stored and computed values. You can write the same code without caring whether the values it manipulates were discovered by on-demand computation or were retrieved from some storage location. In purely functional languages, this works by throwing away all explicit notion of storage, then relying on the runtime to come up with a sensible execution strategy which exploits the underlying machine's storage. Experience shows that runtimes aren't yet clever enough to do an especially good job of this: Haskell programs tend to use lots of memory, and/or to include programmer-inserted strictness annotations which hint at a better strategy.

    The distinction between computation and storage in languages is well-known to be problematic. Stored data representations are a very change-prone design decision, hence the common practice in object-oriented code (and other code!) of using getters to access remote modules' state, rather than having them expose member variables directly. The computation-oriented interface is more general in that intervening computation can be inserted, or not---if the “got” representation matches what's in memory, the getter can just redirect to storage. Conversely, interposing on storage accesses, while possible using memory protection techniques (like the pairing of mprotect() and an appropriate SIGSEGV handler on Unix platforms), is inefficient on conventional hardware, violates most languages' abstraction boundaries and is not easy for user-level programmers to get working. This interposability motivation for getters and setters (and the change-resilience it enables) is far stronger than a purely abstraction-oriented motivation. The argument is essentially the same as Parnas's from 1972, but this line of thinking still evades some bloggers.
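
    To put the getter point concretely, here is a toy sketch (invented names, nothing to do with my own code): callers always go through the getter, so the class is free to answer from a stored field or by computing on demand, and no caller changes when the representation does.

    class temperature
    {
        double kelvin; // the change-prone stored representation
    public:
        explicit temperature(double k) : kelvin(k) {}
        // Computed on the way out; if we later decided to store celsius
        // directly, this getter would just redirect to storage and its
        // callers would never know.
        double celsius() const { return kelvin - 273.15; }
    };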

    Recently I've been writing some code using ptrace() and libunwind and finding myself with a particular requirement: implementing data structures that can work with various implementations of storage. Specifically, one feature of libunwind is that it can unwind the stack in your current process, or in another, using the same interface in each case. This kind of design is a Good Thing in much runtime infrastructure and debugging support generally, because you may or may not want a process separation in the picture: separation is good for isolation, but badly affects performance. Now, libunwind abstracts storage using a read--write pair of memory access functions. This is fine for simple reads and writes. Unfortunately I want something more demanding: I want to traverse some data structure residing in the target process. (As it happens, this data structure is some heap bookkeeping information that is maintained by a set of glibc malloc hooks I wrote as part of the Cake runtime.)
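
    To show what I mean about the same interface covering both cases, here is a minimal sketch of the remote flavour (not my actual code; it assumes the target has already been attached with ptrace() and stopped). The local flavour differs only in how the cursor is initialised, via unw_getcontext() and unw_init_local().

    #include <libunwind.h>
    #include <libunwind-ptrace.h>
    #include <sys/types.h>
    #include <cstdio>

    // Walk the stack of an already-ptrace-attached, stopped process `pid',
    // printing the instruction and stack pointers of each frame.
    void dump_remote_stack(pid_t pid)
    {
        unw_addr_space_t as = unw_create_addr_space(&_UPT_accessors, 0);
        void *upt = _UPT_create(pid); // per-process argument for the accessors

        unw_cursor_t cursor;
        if (unw_init_remote(&cursor, as, upt) == 0)
        {
            do
            {
                unw_word_t ip, sp;
                unw_get_reg(&cursor, UNW_REG_IP, &ip);
                unw_get_reg(&cursor, UNW_REG_SP, &sp);
                std::printf("ip=%lx sp=%lx\n", (unsigned long) ip, (unsigned long) sp);
            } while (unw_step(&cursor) > 0);
        }

        _UPT_destroy(upt);
        unw_destroy_addr_space(as);
    }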

    Generic programming ought to be great for this. Unfortunately, at least in the form of contemporary C++, it isn't enough. In C++, the notion of memory is so pervasive that it can't be fully abstracted. That's not to say you can't try to get most of the way there, and the STL's allocators go some way---but not far enough. Although we can alter how they allocate storage, STL containers are not parameterised in the pointer types they use internally, so we can't make them access their own implementation-specific data structures in a customised way. (See this snippet about overloading the & operator, from an article by Andrei Alexandrescu, for more evidence and examples.)
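
    To make that last point concrete, here is a sketch (hypothetical allocator name; it refers to the unw_read_ptr class I define just below) of where a customised pointer would have to plug in. The allocator interface lets us name such a pointer type, but that is as far as it goes.

    template <typename Target> class unw_read_ptr; // defined in the next listing

    template <typename T>
    struct remote_allocator
    {
        typedef T value_type;
        typedef unw_read_ptr<T> pointer; // C++03 containers may assume this is
                                         // plain T*, and real implementations do,
                                         // so their internal node links never go
                                         // through our customised reads
        // allocate(), deallocate() and the rest of the interface elided
    };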

    If we fall back on hand-coding data structures and algorithms ourselves, we can make some headway. The first step is to define a C++ “pointer-like thing” which actually uses the accessor functions. Here's my first attempt. Among other limitations, it can only read, not write, its pointed-to data.

    template <typename Target>
    class unw_read_ptr
    {
        unw_addr_space_t as;
        Target *ptr;
        mutable Target buf; // HACK: temporary to make operator-> work
    public:
        typedef unw_read_ptr<Target> self_type;
        unw_read_ptr(unw_addr_space_t as, Target *ptr) : as(as), ptr(ptr) {}
        Target operator*() const 
        { 
            Target tmp; 
            assert(sizeof tmp % sizeof (unw_word_t) == 0); // simplifying assumption
            unw_word_t *tmp_base = reinterpret_cast<unw_word_t*>(&tmp);
            for (unw_word_t *tmp_tgt = reinterpret_cast<unw_word_t*>(&tmp);
                tmp_tgt - tmp_base < sizeof tmp / sizeof (unw_word_t);
                tmp_tgt++)
            {
                off_t byte_offset 
                 = reinterpret_cast<char*>(tmp_tgt) - reinterpret_cast<char*>(tmp_base);
                unw_get_accessors(as)->access_mem(as, 
                    reinterpret_cast<unw_word_t>(reinterpret_cast<char*>(ptr) + byte_offset), 
                    tmp_tgt,
                    0,
                    NULL);
            }
            return tmp;
        }
        // HACK: operator-> brokenly demands return of a real pointer...
        // ... so use a per-object temporary. FIXME
        Target *operator->() const { this->buf = this->operator*(); return &this->buf; } 
        self_type& operator++() // prefix ++
        { ptr++; return *this; }
        self_type  operator++(int) // postfix ++
        { Target *tmp = ptr; ptr++; return self_type(as, tmp); }
        self_type& operator--() // prefix --
        { ptr--; return *this; }
        self_type  operator--(int) // postfix --
        { Target *tmp = ptr; ptr--; return self_type(as, tmp); }
        
        // we have two flavours of equality comparison: against ourselves,
        // and against unadorned pointers (risky, but useful for NULL testing)
        bool operator==(const self_type arg) {
            return this->as == arg.as
                && this->ptr == arg.ptr;
        }
        bool operator==(void *arg) { return this->ptr == arg; }
        
        bool operator!=(const self_type arg) { return !(*this == arg); }
        bool operator!=(void *arg) { return !(this->ptr == arg); }
    
        // default operator= and copy constructor work for us
        // but add another: assign from a raw ptr
        self_type& operator=(Target *ptr) { this->ptr = ptr; return *this; }
    
        self_type operator+(int arg)
        { return self_type(as, ptr + arg); }
    
        self_type operator-(int arg)
        { return self_type(as, ptr - arg); }
    
        ptrdiff_t operator-(const self_type arg)
        { return this->ptr - arg.ptr; }
        
        operator void*() { return ptr; }
    };
    

    This has got me as far as being able to traverse a linked list residing in the target process using the same code you'd use to traverse a local one. Unfortunately, a linked list doesn't cut it for my performance requirements: the target process heap contains many thousands of allocated blocks, and I need to be able to resolve a heap address to a particular block quickly. So, perhaps a hash table or a red--black tree would be a good choice. This is where the pain hits: I really don't want to create my own implementations of either of these. I could cannibalise the source of an existing one (and I think that's just what I'm going to do) but it'd be nice to take an STL-like container and just use it as-is. (I am planning to use google-sparsehash, and create a hacked version of the lookup function, using my special pointer class above, for the “separate process” case.)
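
    For a flavour of what that linked-list traversal looks like, here is a hypothetical illustration (the record type is invented, not my actual malloc-hook bookkeeping, and it assumes the same headers the class above needs): the nodes live in the target process, but the loop is shaped exactly like its in-process counterpart.

    struct heap_block_record
    {
        void *begin;
        size_t size;
        heap_block_record *next; // a pointer valid in the *target's* address space
    };

    size_t count_blocks(unw_addr_space_t as, heap_block_record *remote_head)
    {
        size_t n = 0;
        for (unw_read_ptr<heap_block_record> p(as, remote_head);
             p != (void*) 0;  // NULL test via the void* comparison above
             p = p->next)     // operator-> copies the remote node; operator= rebinds
        {
            ++n;
        }
        return n;
    }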

    A final conclusion is that polymorphism is all very well, but only when the programmer can be oblivious to it. Polymorphism is after all a very low-level concept. Why should we require separate implementations of a function for operating on different kinds of data? From low-level programming we have got used to the idea that data comes in different forms, like ints and floats and lists and arrays and so on, and that these are treated separately unless some polymorphic cleverness unifies them. But in a truly high-level programming language, it should be a given that when your code is abstract with respect to your data structures' representations, or with respect to any other underlying logic (such as memory access, in my example), then you can mix-and-match any implementation you like for that underlying logic.

    Under this criterion, ML-style parametric polymorphism wins nicely, because type inference means that the programmer doesn't need to care about the machinery surrounding polymorphism. In languages where programmer anticipation is required---such as by adding particular type parameters in the case of C++ templates, or by writing particular type annotations as one might in Haskell---we are forcing the programmer to be aware of these low-level distinctions, so we have not yet delivered obliviousness. (I distinguish Haskell from ML because, idiomatically, my limited experience suggests that Haskell programs contain an awful lot more type annotations than ML programs do. I will stand corrected if this turns out not to be so, or not to matter!) Even ML inflicts polymorphism on the programmer in its indecipherable compile-time type errors, but maybe someone can write a compiler which makes things comprehensible.

    [/research] permanent link

    Fri, 02 Apr 2010

    Where the time goes

    At 2pm today I finished making some comments on a print-out of a draft undergraduate dissertation by one of my students. He's out of town at the moment, so I scanned in my scribblings and prepared to send them. I'd printed the 68-page draft 2-up to save paper. We have a nifty scanner that can sheet-feed, so scanning 34 pages was very quick and easy, except for one problem: for some reason the pages came out in reverse. I later discovered that the reason was that although I placed the pages face-down in the feeder, as the markings on the feeder suggested I should, the software has a very helpful option called “scan facing forward” (or somesuch) which reverses this logic. More annoyingly, this is turned on by default! Unfortunately the scanner software has no facility to re-order pages (except during scanning). No problem, I thought -- I'll go back to my desk and re-order them with my familiar Unix tools, rather than spending more time down at the scanner.

    It should be simple: split the PDF into pages using gs -sDEVICE=pdfwrite -sOutputFile=page%d, reorder the pages using ls | tac, and pdfjoin. Problem one: I couldn't seem to get page%d working with Ghostscript's pdfwrite driver, so had to use psrgb and then convert the Postscript output to PDF using ps2pdf. This should work, surely? No. Problem two: neither ps2pdf nor Ghostscript understands page orientation properly, so what I get is a portrait-orientation window onto my landscape-oriented pages, with the right-hand side cut off. This web page reveals that it's a limitation of both Ghostscript and its pdfwrite driver. This bug report shows that the problem has been around for years and hasn't gone away. None of the suggested Ghostscript incantations on this web page has any effect for me, but I waste an hour trying them and various other combinations of options.

    It's now 4.22pm, and I've just tried using Adobe Acrobat to perform the same feat. It has exactly the same problem! As my time gets more contended, it gets more and more annoying to spend it fighting crappy software. I could probably optimise this by developing the judgement not to pursue the “should work” way, and instead cut straight to the “ugly, but definitely works” way. I'm about to go down and re-scan the document.

    [/meta] permanent link

