Next: The Virtual Processor Interface Up: The Design and Implementation Previous: Requirements on the Operating

Structural Overview

 

Nemesis is structured to provide fine-grained resource control and to minimise application crosstalk. To meet these goals it is important to account for as much as possible of the time used by an application, to keep the application informed of its resource use, and to enable the application to schedule its own subtasks. At odds with this desire is the need for code which implements concurrency and access control over shared state to execute in a protection domain different from that of its client (either the kernel or a server process).

A number of approaches have been taken to try to minimise the cost of interacting with such servers. One technique is to support thread migration; there are systems which allow threads to undergo protection domain switches, both on specialised hardware architectures [9] and on conventional workstations [10]. However, such threads cannot easily be scheduled by their parent application, and must be implemented by a kernel which manages the protection domain boundaries. This kernel must, as a consequence, provide synchronisation mechanisms for its threads, and applications are no longer in control of their own resource tradeoffs.

The alternative is to implement servers as separate schedulable entities. Some systems allow clients to transfer some of their resources to a server so as to preserve a given level of service across server calls. The Processor Capacity Reserves mechanism [11] is the most prominent of these: the kernel implements objects called reserves, which can be transferred from client threads to servers. This mechanism can be implemented with a reasonable degree of efficiency, but it does not fully address the problem.
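The reserve-transfer idea can be illustrated with a toy model. This is a hedged sketch only, not the actual Processor Capacity Reserves implementation; the structure and function names (`reserve`, `bind_reserve`, `account`) are invented for illustration. The key point it shows is that, during a cross-domain call, server work is billed against the client's CPU budget rather than the server's own.

```c
#include <assert.h>

/* Hypothetical model of a CPU reserve: a budget of microseconds per
 * accounting period, charged to whoever the reserve is bound to. */
struct reserve {
    unsigned budget_us;   /* CPU time allotted per period */
    unsigned used_us;     /* CPU time consumed so far this period */
};

struct thread {
    struct reserve *res;  /* reserve currently charged for this thread */
};

/* On a cross-domain call, the client's reserve is attached to the
 * server thread, so server work is billed to the client's budget.
 * The server's previous reserve is returned so the caller can
 * restore it when the call completes. */
static struct reserve *bind_reserve(struct thread *server,
                                    struct reserve *client_res)
{
    struct reserve *old = server->res;
    server->res = client_res;
    return old;
}

/* Charge t_us microseconds of CPU time to whichever reserve is
 * currently bound to the thread. */
static void account(struct thread *th, unsigned t_us)
{
    th->res->used_us += t_us;
}
```

Even in this simplified form, the model exposes the limitation noted above: the kernel must mediate every bind and unbind, and the accounting says nothing about which client a shared server should serve next.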

Nemesis takes the approach of minimising the use of shared servers so as to reduce the impact of application crosstalk: the minimum necessary functionality for a service is placed in a shared server, while as much processing as possible is performed within the application domain. Ideally, the server should perform only privileged operations, in particular access control and concurrency control.
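The intended split can be sketched as follows. This is a minimal illustration, not Nemesis code: all names (`server_acquire`, `client_update`, the capability bit) are hypothetical. The privileged side does nothing but access control and concurrency control; all of the actual processing happens in the unprivileged client library.

```c
#include <assert.h>
#include <stdbool.h>

struct shared_state {
    int  value;    /* the state the service manages */
    bool locked;   /* concurrency control, held by the server side */
};

/* --- privileged side: access control + concurrency control only --- */
static bool server_acquire(struct shared_state *s,
                           unsigned cap, unsigned required)
{
    if ((cap & required) != required)   /* access control */
        return false;
    if (s->locked)                      /* concurrency control */
        return false;
    s->locked = true;
    return true;
}

static void server_release(struct shared_state *s)
{
    s->locked = false;
}

/* --- unprivileged client library: all the real processing --- */
static int client_update(struct shared_state *s, unsigned cap, int delta)
{
    if (!server_acquire(s, cap, 0x1))
        return -1;                      /* denied: no capability or busy */
    s->value += delta;                  /* the service's actual work */
    server_release(s);
    return s->value;
}
```

Because the processing runs in the client's own domain, its cost is naturally accounted to the client, which is precisely the property the preceding paragraphs argue for.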

A consequence of this approach is the desire to expose some server internal state, in a controlled manner, to client domains. Section V describes how the particular use of interfaces and modules in Nemesis supports a model in which all text and data occupy a single virtual address space, facilitating this controlled sharing. It must be emphasised that this in no way implies a lack of memory protection between domains: the virtual-to-physical address translations in Nemesis are the same for all domains, while the protection rights on a given page may vary. What it does mean is that any area of memory in Nemesis can be shared, and that the virtual addresses of physical memory locations do not change between domains.
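The distinction between translation and protection can be made concrete with a small sketch. This is an illustrative model, not the Nemesis memory system: the table sizes, field names, and rights encoding are all invented. The translation table is a single global structure shared by every domain, while each domain carries only its own protection bits.

```c
#include <assert.h>

#define NPAGES 4

/* One system-wide virtual-to-physical mapping, shared by all
 * domains; a virtual address means the same thing everywhere. */
static const unsigned phys_frame[NPAGES] = { 7, 3, 9, 1 };

enum { PROT_R = 1, PROT_W = 2 };

/* Per-domain state holds only protection rights, not translations. */
struct domain {
    unsigned char rights[NPAGES];
};

static unsigned translate(unsigned vpage)
{
    return phys_frame[vpage];           /* identical for every domain */
}

static int can_write(const struct domain *d, unsigned vpage)
{
    return (d->rights[vpage] & PROT_W) != 0;
}
```

Sharing a region between domains then requires no remapping at all: both domains already agree on the addresses, and only the rights entries differ.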

Figure 2: Nemesis system architecture

The minimal use of shared servers stands in contrast to recent trends in operating systems, which have been to move functionality away from client domains (and indeed the kernel) into separate processes. However, there are a number of examples in recent literature of services being implemented as client libraries instead of within a kernel or server. Efficient user-level threads packages have already been mentioned. Other examples of user level libraries include network protocols [12], window system rendering [13] and Unix emulation [14].

Nemesis is designed to use these techniques. In addition, most of the support for creating and linking new domains, for setting up inter-domain communication, and for networking is performed in the context of the application. The result is a `vertically integrated' operating system architecture, illustrated in Figure 2. The system is organised as a set of domains, which are scheduled by a very small kernel.
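The division of labour in this architecture can be suggested by a deliberately tiny scheduling sketch. The round-robin policy below is purely illustrative and is not the Nemesis scheduler; the point is the shape of the kernel's job: it picks the next runnable domain and nothing more, since thread and subtask scheduling happen inside the domains themselves.

```c
#include <assert.h>
#include <stdbool.h>

#define NDOMAINS 3

/* The kernel's view of a domain is minimal: for this sketch, just
 * whether it is runnable. All finer-grained scheduling decisions
 * are made by user-level code inside each domain. */
struct domain {
    bool runnable;
};

/* Pick the next runnable domain after `last`, round-robin.
 * Returns -1 if no domain is runnable. */
static int pick_next(const struct domain d[NDOMAINS], int last)
{
    for (int i = 1; i <= NDOMAINS; i++) {
        int cand = (last + i) % NDOMAINS;
        if (d[cand].runnable)
            return cand;
    }
    return -1;
}
```

Keeping the kernel's decision this small is what allows each domain to retain control of its own resource tradeoffs, as argued earlier in the section.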





I. Leslie, D. McAuley, R. Black, T. Roscoe, P. Barham, D. Evers, R. Fairbairns & E. Hyden