What is the future of the Internet? We see it as a new kind of distributed database, a single huge document, a tree of strings, divided and replicated across thousands of servers. What security mechanisms will encourage users to share control over data more widely? How far can we detach the question of who controls the servers from who controls the data?
Project Dendros creates a unifying data-management layer for Internet applications. Its core is a flexible and simple data model, essentially a tree of byte strings. This is combined with a modular processing model based on access filters, which implement functions such as storage management, remote read/write access, authentication, access control, revision control, and replication. A particular aim is the design of new cooperative administration concepts that enable participants with limited mutual trust to cooperate effectively in a single global name space.
Dendros is a middleware layer that sits between a reliable transport mechanism – such as TCP/IP – and distributed applications. It differs substantially from procedural middleware systems (RPC, CORBA, Web Services) in that it stores the data itself, instead of just passing on method invocations to other databases. Dendros defines a modular set of methods for reading, editing and managing a standardized distributed data structure. It has more in common with a distributed file system, database, directory, or some peer-to-peer systems than with a remote procedure call facility. Applications cooperate via Dendros by loading their data into its globally distributed tree of strings. They then use the Dendros facilities to access, update, search, protect, replicate and version control this data.
The ultimate aim of Project Dendros is to create, study and optimize a unifying replacement technology for historically grown standards and technologies such as:
We have identified two key research problems on which we need to work before our vision can be fully realized, and they are our focus for this year.
The first focus is the selection and definition of a flexible generic hierarchical data syntax and processing model that is
No existing standard seems to be a good candidate. For instance, ASN.1 and SGML are early 1980s attempts to define generic models for structured file and packet formats. They found their way into some popular applications (X.509, HTML) but turned out to be rather inconvenient to use in practice. XML was later defined as a small subset of SGML’s syntax that can be parsed unambiguously without an externally supplied grammar. It is a significant improvement that generated enormous industry interest in generic hierarchical data models. However, having inherited SGML’s strong orientation towards being a text-document markup language, it still fails on many of the above requirements.
We have now found a suitable new data model, which we call the Compound Data Structure. Some of its key features are:
For example, XML maps very naturally onto the compound model. Element names, attribute names, attribute values and “PCDATA” text are all represented as strings. XML attribute names must be unique within an element and their relative order does not matter; attribute names are therefore stored in the set of child nodes of the corresponding element string, where each maps to the corresponding value. XML child elements and text content, on the other hand, are stored in the ordered list of child nodes. Each node’s tag distinguishes element names from text content.
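As a minimal sketch of this mapping (all class and field names here are invented for illustration; they are not the actual Dendros API), a tiny XML fragment could be represented as a tagged tree of byte strings like this:

```python
from dataclasses import dataclass, field

# Illustrative compound node: a tagged byte string with an ordered list
# of children (XML child elements and text) and an unordered
# name-to-node mapping (XML attributes). Tag names are made up.
@dataclass
class Node:
    tag: str                                       # e.g. "element" or "text"
    value: bytes
    children: list = field(default_factory=list)   # ordered child nodes
    attrs: dict = field(default_factory=dict)      # set of child nodes: name -> node

# Hand-built mapping of:  <p class="note">Hello</p>
p = Node(tag="element", value=b"p",
         attrs={b"class": Node(tag="text", value=b"note")},
         children=[Node(tag="text", value=b"Hello")])

assert p.attrs[b"class"].value == b"note"   # attribute order is irrelevant
assert p.children[0].tag == "text"          # the tag separates text from elements
```

Note how the attribute value is itself a full node here, which is exactly the generalization over flat XML attribute strings described below.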
In XML terms, compounds can be understood as a generalization and simplification that removes a number of XML’s restrictions, without adding significantly to the complexity of the implementation. Compounds are in a sense a variant of XML in which attribute names and values can be entire XML documents again, and not just flat strings. In addition, every single node in the compound tree, not just elements as in XML, can be annotated with attributes.
File systems map just as easily onto compounds. The name-to-node mapping provided by the set of child nodes corresponds to a directory. For the user of a compound storage system, there is no difference between XML attributes and file-system subdirectories: both are represented by the same set construct. Compounds can thus be seen as a generalization of file systems in which a node can be a directory and a file at the same time (a useful notion already implemented by HTTP servers via index.html), and in which a file is either a flat byte string or an XML-like tree structure.
The very natural and convenient mappings from file systems and many structured file formats to compounds make them an ideal choice as the underlying data model of a distributed storage architecture.
The abstract data model just outlined is merely the foundation of our work. It is then combined with a range of ways for encoding compounds as byte strings, optimized for different needs (human reading, use with plaintext editors, efficient access, compactness, sorting and hashing, etc.).
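The concrete encodings are not fixed by the abstract model. Purely as an illustration (the formats below are invented, not the Dendros encodings), the same tiny tree could be serialized both for human reading and as a compact length-prefixed byte string suitable for hashing or sorting:

```python
# Illustrative node representation: (name, list_of_children).
node = ("p", [("Hello", [])])

def encode_text(n, indent=0):
    """Human-readable, plaintext-editor-friendly encoding (invented format)."""
    name, children = n
    lines = [" " * indent + name]
    for c in children:
        lines += encode_text(c, indent + 2)
    return lines

def encode_binary(n):
    """Compact length-prefixed encoding (invented format), convenient
    for efficient access, hashing and byte-wise sorting."""
    name, children = n
    payload = name.encode()
    out = bytes([len(payload)]) + payload + bytes([len(children)])
    for c in children:
        out += encode_binary(c)
    return out

assert encode_text(node) == ["p", "  Hello"]
assert encode_binary(node).hex() == "0170010548656c6c6f00"
```

The point is only that one abstract tree admits several interchangeable byte-string encodings, each optimized for a different use.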
We also work on a processing model that we believe will be considerably more convenient to use for application developers than existing practice with XML. The key idea is that our compound API will provide a mechanism that enables applications to activate and deactivate various “filters”. These are software layers that provide transformed views of the accessed compound. Each filter interacts with its next higher and lower layer by exchanging compound read/write accesses. Filters add some value for the user, usually by providing an additional layer of abstraction.
A simple example of a filter would be one that transparently detects strings that contain compressed compounds, and decompresses and unpacks their content on-the-fly, as in a compressed file system. Transparent dereferencing of symbolic links would be another example. Filters can be activated either by attribute strings with processing-instruction tags found in the stored compound, or by processing instructions that are embedded in the access path in a request. Remote access to other servers is equally handled by filters, and with processing instructions embedded in query paths, the application has full control over which filters should be applied on a remote server or in the local compound access library. Filters are a kind of driver architecture for compound-access libraries. They embody much of the functional richness of a compound storage system in a very modular way, giving users great control over the kind and location of processing steps applied to their data.
Filters use the same simple compound access interface to communicate with the next layer below that they provide to the layer immediately above. They can therefore be stacked and reordered arbitrarily. The bottom layer below all filters can be a comparatively simple compound storage server, whereas the layer above all filters is the application.
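A sketch of this stacking idea (interface and class names are invented for illustration; the compression filter example is from the description above):

```python
import zlib

class Store:
    """Bottom layer: a deliberately simple compound store,
    here reduced to a path -> byte-string lookup."""
    def __init__(self, data):
        self.data = data
    def read(self, path):
        return self.data[path]

class DecompressFilter:
    """A filter offers the same read interface upward that it
    consumes downward, so filters stack and reorder freely."""
    def __init__(self, lower):
        self.lower = lower
    def read(self, path):
        raw = self.lower.read(path)
        # Transparently unpack zlib-compressed strings on-the-fly.
        return zlib.decompress(raw) if raw[:2] == b"\x78\x9c" else raw

store = Store({"/doc": zlib.compress(b"hello world")})
stack = DecompressFilter(store)      # the application talks to the top layer
assert stack.read("/doc") == b"hello world"
```

Because every layer speaks the same interface, further filters (link dereferencing, remote access, access control) could be pushed onto the same stack without any layer knowing about the others.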
We believe that compounds on their own are a promising successor for XML. We hope that their use will not remain restricted to the distributed data management system designed in this project, but will find their way into many other next-generation Internet technologies.
The second focus of Project Dendros is to provide a distributed storage system that is designed to support the joint administration and long-term maintenance of data collections that are not “owned” by any single person or organization. We are particularly interested in the storage management needs of large, heterogeneous and potentially mutually suspicious communities. For example, many open-source projects or international standardization committees work towards improving a set of files or documents, but many of the members are far from trusting each other completely. In such environments, the object managed by the shared storage system is not just a set of files, but the revision history and audit log that led to their current state.
Existing practice in group storage management usually involves a single central server that defines what the authoritative current snapshot and revision history of a project’s files are. In commonly used repository management software (e.g., CVS, sourceforge.net), compromising the central server is sufficient to make modifications to the archive without automatically making every client in the group aware of the change. Furthermore, compromised central servers could generate significant confusion about the current state of the project by maintaining different conflicting revision histories for different group members. These issues are of particular concern with the development and maintenance of security-critical software that is used to build trusted computing bases of important infrastructure applications.
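The Dendros audit format is not specified here, but one standard way to make such history rewrites detectable — sketched below with invented function names — is a hash chain over revisions, so that any client who remembers the latest digest can notice a server-side rewrite of earlier history:

```python
import hashlib

def revision_digest(prev_digest: bytes, content: bytes) -> bytes:
    # Each revision record commits to its predecessor's digest.
    return hashlib.sha256(prev_digest + content).digest()

# Honest history:
h = b"\x00" * 32                               # genesis digest
for content in [b"rev1", b"rev2", b"rev3"]:
    h = revision_digest(h, content)

# A server that silently rewrites an earlier revision produces a
# different final digest, which clients can detect:
t = b"\x00" * 32
for content in [b"rev1", b"TAMPERED", b"rev3"]:
    t = revision_digest(t, content)

assert t != h
```

A compromised server could still fork the chain and show different histories to different members; detecting that requires the members (or their servers) to compare digests with each other, which motivates the replicated peer-group design described below.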
Potential contributors may also be reluctant to participate in projects where they will not have full control over the integrity of the storage server, and handing over control of the authoritative server from one person or organization to the next can become a major “political” decision.
We therefore aim for a storage architecture with the following properties:
The Dendros replication mechanism aims to allow anyone who might mistrust the operators of other servers to join the peer group with their own server, so as to provide a local guarantee for the availability of the data and the integrity of its audit information. We hope that this technology will help to encourage cooperation of heterogeneous groups over the Internet and reduce or delay the split-ups of collaborative projects into the use of separate storage archives. We aim to replace the need for handing over the effective authority over a storage archive from one individual to the next with a more gradual change of group membership, to help increase the continuity and lifetime of heterogeneous collaborations.
Some of the protocols needed to achieve these goals cause the data traffic between peers to grow quadratically with the number of participating servers. We therefore expect that Dendros will typically be used with not more than 20–30 servers in the “writer” peer group that accepts and agrees on update requests and develops the revision history forward. A larger set of “reader” servers (possibly many thousand) can nevertheless passively copy the updates in the order agreed by the “writers” and independently verify the audit information included. The “readers” independently ensure the availability of the data and its past versions for read access, but not for updates. We believe that a set of a few dozen independently run servers through which all updates need to pass is sufficient to ensure the long-term continuity of projects. Given the benefits of running strong protocols designed to handle the risk of malicious servers, this approach seems to us preferable to the fully symmetric networks suggested in many “peer-to-peer” projects.
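A back-of-envelope illustration of why the writer group stays small (the counts below are idealized; the actual message complexity depends on the agreement protocol used): all-pairs exchange among n writers costs on the order of n·(n−1) messages per update, while fan-out to passive readers only grows linearly.

```python
def writer_traffic(n):
    # Idealized all-pairs exchange among n "writer" peers per update.
    return n * (n - 1)

def reader_traffic(m):
    # Passive "readers" each receive one copy of an agreed update.
    return m

assert writer_traffic(5) == 20
assert writer_traffic(30) == 870            # still modest at 20-30 writers
assert writer_traffic(3000) == 8_997_000    # why large writer groups do not scale
assert reader_traffic(3000) == 3000         # linear cost of many readers
```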
With regard to practical application needs and demonstration examples, we are motivated in particular by the archival and file-management needs of long-term projects, with an initial focus on collaborative software development, international standardization, and scientific publication.
For more information on Project Dendros, please come back from time to time to our growing collection of working documents and papers:
We have implemented a proof-of-concept prototype library in Perl and have a first production application that benefits substantially from it (the ucampas web content-management system used for our main departmental web pages). We plan to release a fuller, high-performance, documented reimplementation of the library in C for 2009/2010.
Markus G. Kuhn