Rambles around computer science

Diverting trains of thought, wasting precious time

Thu, 03 Mar 2011

The end-to-end razor

Most of us have heard of Occam's razor. Usually it is invoked as the principle that given two plausible theories, explanations or solutions to a problem, we should prefer to believe the simpler one.

I've always been a fan of “Einstein's razor”, which paraphrases a longer quotation from Einstein as the snappy dictum “Everything should be made as simple as possible, but no simpler.” The appeal is in its counterbalancing: there is value in simplicity, but there is harm in oversimplification.

A third razor-like object occurs more often in system design. Most practical CS researchers will have read the “End-to-end arguments” paper. Usually, the end-to-end arguments are dumbly invoked to criticise any design which pushes a feature into the lower layers of a complex system (notably the Internet) when it could be implemented higher up. This interpretation is unfortunate. For one, it overlooks at least two subtleties expounded in the original paper: that a key criterion is whether the feature can be implemented completely and correctly at the lower layer, and also whether doing so brings any compulsory overheads (detrimental to applications not requiring the feature). But more importantly, it omits a vital counterbalancing concern: by implementing features higher up, we nearly always end up with not one but many variants of the same feature. Agreeing on which one to use is a hopeless problem of distributed (human) consensus, so we end up with a huge mess of interoperability problems brought on by this unnecessary diversity. So in fact, there are very real incentives for implementing functionality at the lowest sensible layer. The traditional end-to-end arguments don't bring these incentives out.
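The canonical example in the original paper is careful file transfer: only the endpoints can check, completely and correctly, that the data survived the whole journey. As a minimal sketch (in Python, with a stand-in for the network transfer itself), an application-level integrity check covers failure modes that no link- or transport-layer checksum can:

```python
import hashlib

def end_to_end_digest(data: bytes) -> str:
    # Computed at the endpoints, over the whole payload, independently
    # of whatever checksumming the layers below may or may not do.
    return hashlib.sha256(data).hexdigest()

# The sender computes a digest before transmission...
payload = b"contents of the file being transferred"
sent_digest = end_to_end_digest(payload)

# ...and the receiver recomputes it after reassembly. A lower-layer
# checksum (e.g. TCP's) only protects data in flight on one hop; it
# cannot catch corruption in buffers, on disk, or in software between
# the endpoints. Only the end-to-end check is complete.
received = payload  # stand-in for data that arrived over the network
assert end_to_end_digest(received) == sent_digest
```

The point of the criteria above is that the transport layer *cannot* implement this feature completely, so it belongs at the endpoints; a partial lower-layer checksum may still be worthwhile as a performance optimisation, but it is not a substitute.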

In fact we should have been paying more attention to Occam all along, because his original statement that entia non sunt multiplicanda praeter necessitatem---“entities must not be multiplied beyond necessity”---is extremely suggestive of the cost of unnecessary diversity. Combining this razor and Einstein's, I prefer a different formulation of the end-to-end arguments, which I hereby name the “end-to-end razor” (with apologies to anyone who's used that name previously to mean something else): “Everything should be implemented at the lowest sensible layer, but no lower.” You can argue about what's “sensible”, but the criteria are the same as in the original end-to-end arguments. The difference is that the counterbalancing of considerations is explicit: there may be both value and harm in building at lower levels.

Personally, as a programming researcher, I relish the challenge of working at lower levels. Solving a general problem by building a system which is tied to one programming language, for example, seems unsatisfying to me. Not only did the decision to make Cake target object code mean that it provides a somewhat language-independent solution to its problem, but, for me at least, it was just a lot more fun hacking around the OS, linker and C library than it would have been tinkering with a JVM or munging source code. I'm not entirely sure why....

[/research] permanent link

