The Java Message Service (JMS) is a specification that provides a consistent Java API for accessing message-oriented middleware services. This paper presents a test harness that automates the testing of JMS implementations (providers) for correctness and performance. Since the JMS specification is expressed in informal language, a formal model for JMS behaviour is developed, based on the I/O automata used in other group communication systems. The test harness has been successfully used to test a number of JMS implementations. This paper contains a descriptive presentation of the formal model; the full details are found in a technical report.
Java-based distributed applications generally use RMI (Remote Method Invocation) for accessing remote objects. When used in a wide-area environment, the performance of such applications can be poor because of the high latency of RMI. This latency can be reduced by caching objects at the client node. However, the use of caching introduces other issues, including the cost of caching the object and the cost of maintaining its consistency. This paper presents a middleware for object caching in Java RMI-based distributed applications. The mechanisms used by the middleware are fully compatible with Java RMI and are transparent to the clients. Using this middleware, the system designer can select the caching strategy and consistency protocol most appropriate for the application. The paper illustrates the benefits of using these mechanisms to improve the performance of RMI applications.
Entity Beans provide both data persistence and the possibility of caching objects and data in the middle tier. The EJB 1.1 specification has three commit options which determine how EJBs are cached across transactions: Option C pools objects without identity; Option B caches objects with identity; Option A caches objects and data. This paper explores the impact of these different commit options, and of pool and cache sizes, on the performance of a realistic application running on the Borland Application Server.
Nomadic computing imposes serious problems and new requirements on middleware platforms supporting distributed applications. Among the problems are the characteristics of wireless links, such as sudden and frequent disconnection, long round-trip times, high bit error rates and low bandwidth. There are also new requirements, such as handover support and the necessity to use different networks (bearers). All these problems and requirements lead to the demand for an association between client and server that is independent of a transport connection. In this paper, we present a session layer that provides such an association for the CORBA middleware platform, based on the Wireless Application Protocol (WAP), which is specifically designed for mobile and wireless devices. The session protocol in WAP, WSP, turned out to be unable to fulfill our requirements, so it was necessary to define our own session layer. The session layer provides explicit and implicit mechanisms to suspend and resume a session, reconnection to the session after the bearer has been lost or changed, and a solution to the lost-reply problem. Furthermore, it contains an interface through which session-aware applications can control these mechanisms themselves at a fine-grained level. This paper presents a detailed description of the session layer, its integration into CORBA, a mapping of GIOP messages onto WTP and selected implementation details.
The proliferation of mobile devices and new software creates a need for computing environments that can react to environmental (context) changes. To date, insufficient attention has been paid to defining an integrated component-based environment that can describe complex computational context and handle different types of adaptation for a variety of new and existing pervasive enterprise applications.
In this paper a run-time environment for pervasive enterprise systems is proposed. The associated architecture uses a component based modelling paradigm, and is held together by an event-based mechanism which provides significant flexibility in dynamic system configuration and adaptation. The approach used to describe and manage context information captures descriptions of complex user, device and application context including enterprise roles and role policies. In addition, the coordination language used to coordinate components of the architecture that manage context, adaptation and policy provides the flexibility needed in pervasive computing applications supporting dynamic reconfiguration and a variety of communication paradigms.
This paper describes an experimental study in the use of a composable proxy framework to improve the quality of interactive audio streams delivered to mobile hosts. Two forward error correction (FEC) proxylets are developed, one using block erasure codes, and the other using the GSM 06.10 encoding algorithm. Separately, each type of FEC improves the ability of the audio stream to tolerate errors in a wireless LAN environment. When composed in a single proxy, however, they cooperate to correct additional types of burst errors. Results are presented from a performance study conducted on a mobile computing testbed.
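As a minimal illustration of the block-erasure idea behind one of the FEC proxylets, the following sketch uses a toy XOR parity scheme (not the actual erasure codes used in the paper): sending one parity packet alongside the data packets lets the receiver reconstruct any single lost packet.

```python
# Toy XOR-parity erasure code (illustrative only; the paper's proxylet uses
# proper block erasure codes): k data packets plus one parity packet allow
# recovery from the loss of any single packet.

def parity(packets):
    """XOR all equal-length packets together into one parity packet."""
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

def recover(received, lost_index, par):
    """XOR the parity packet with all surviving packets to rebuild the lost one."""
    missing = par
    for i, p in enumerate(received):
        if i != lost_index and p is not None:
            missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

data = [b"aaaa", b"bbbb", b"cccc"]
par = parity(data)
received = [data[0], None, data[2]]      # packet 1 lost in transit
assert recover(received, 1, par) == b"bbbb"
```

This only tolerates one erasure per block; real block erasure codes trade extra parity packets for tolerance of longer bursts, which is why composing them with a second FEC layer helps in the paper's wireless setting.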
Applications that process continuous information flows are
challenging to write because the application programmer must deal with
flow-specific concurrency and timing requirements, necessitating the explicit
management of threads, synchronization, scheduling and timing. We believe
that middleware can ease this burden, but middleware that supports control-flow
centric interaction models such as remote method invocation does not match the
structure of these applications. Indeed, it abstracts away from the very
things that the information-flow centric programmer must control.
We are defining Infopipes as a high-level abstraction for information flow, and
we are developing a middleware framework that supports this abstraction
directly. Infopipes handle the complexities associated with control flow and
multi-threading, relieving the programmer of this task. Starting
from a high-level description of an information flow configuration, the
framework determines which parts of a pipeline require separate threads or
coroutines, and handles synchronization transparently to the application
programmer. The framework also gives the programmer the freedom to write or
reuse components in a passive style, even though the configuration will actually
require the use of a thread or coroutine. Conversely, it is possible to write a
component using a thread and know that the thread will be eliminated if it is
not needed in a pipeline. This allows the most appropriate programming model to
be chosen for a given task, and existing code to be reused irrespective of its
activity model.
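As a purely illustrative sketch (the API below is invented, not the Infopipes framework itself), the idea of writing stages in a passive style while the framework supplies whatever threads and buffering the configuration needs can be pictured as follows:

```python
# Hypothetical sketch: pipeline stages are passive generators with no
# thread or synchronization code; a "framework" function decides where a
# thread and buffer are needed and drives the pipeline transparently.

import threading
import queue

def source():                 # passive producer: a plain generator
    for i in range(5):
        yield i

def double(items):            # passive filter: no threads, no locks
    for x in items:
        yield 2 * x

def run_pipeline(stages, sink):
    """Compose passive stages, then introduce the one thread/buffer this
    configuration needs; the component code above stays oblivious to it."""
    q = queue.Queue()
    def drive():
        flow = stages[0]()
        for stage in stages[1:]:
            flow = stage(flow)
        for item in flow:
            q.put(item)
        q.put(None)           # end-of-stream marker
    threading.Thread(target=drive, daemon=True).start()
    while (item := q.get()) is not None:
        sink.append(item)

out = []
run_pipeline([source, double], out)
assert out == [0, 2, 4, 6, 8]
```

The point mirrors the abstract: the stage code is reusable in either a threaded or a thread-free configuration, since activity placement is the framework's decision, not the component author's.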
Applications often use objects that they do not create. In general, these objects belong to an execution environment and are used to provide services (server objects). This makes applications strongly dependent on these objects and vulnerable to any modifications to them. In this paper, we present a solution to this problem through the service group concept. A service group is an intermediary between the applications and the server objects. A service group is defined by the administrator of the shared services, for the entire set of client applications. A service group embodies a collection of signatures corresponding to the provided services and maintains the required associations between these signatures and the actual implementations of these services by the server objects. The client applications access the services through service groups and are no longer directly related to servers, thus becoming more independent and better protected from modifications to the server objects. Service groups not only make it possible to pool disparate services in order to structure the execution environment, but also to construct new services by composing existing ones.
Middleware has emerged as an important architectural component in modern distributed systems. Most recently, industry has witnessed the emergence of component-based middleware platforms, such as Enterprise JavaBeans and the CORBA Component Model, aimed at supporting third party development, configuration and subsequent deployment of software. The goal of our research is to extend this work in order to exploit the benefits of component-based approaches within the middleware platform as well as on top of the platform, the result being more configurable and reconfigurable middleware technologies. This is achieved through a marriage of components with reflection, the latter providing the necessary levels of openness to access the underlying component infrastructure. More specifically, the paper describes in detail the OpenCOM component model, a lightweight and efficient component model based on COM. The paper also describes how OpenCOM can be used to construct a full middleware platform, and also investigates the performance of both OpenCOM and this resultant platform. The main overall contribution of the paper is to demonstrate that flexible middleware technologies can be developed without an adverse effect on the performance of resultant systems.
Object migration is an often overlooked topic in distributed object-oriented platforms. Most common solutions provide data serialization and code mobility across several hosts. But existing mechanisms fall short in ensuring consistency when migrating objects, or agents, involved in coordinated interactions with each other, possibly governed by a multi-phase protocol. We propose an object migration scheme addressing this issue, implemented on top of the Coordination Language Facility (CLF). It exploits the particular combination of features in CLF: the resource-based programming paradigm and the communication protocol integrating a negotiation and a transaction phase. We illustrate through examples how our migration mechanism goes beyond classical solutions. It can be fine-tuned to consider different requirements and settings, and thus be adapted to a variety of situations.
Although it has long been realised that ACID transactions by themselves are not adequate for structuring long-lived applications, and much research has been done on developing specific extended transaction models, no middleware support for building extended transactions is currently available, and a programmer often has to develop application-specific mechanisms. The CORBA Activity Service Framework described in this paper is a way out of this situation. The design of the service is based on the insight that the various extended transaction models can be supported by providing a general-purpose event signalling mechanism that can be programmed to enable activities (application-specific units of computation) to coordinate each other in a manner prescribed by the model under consideration. The different extended transaction models can be mapped onto specific implementations of this framework, permitting such transactions to span a network of systems connected indirectly by some distribution infrastructure. The framework described in this paper is an overview of the OMG's Additional Structuring Mechanisms for the OTS standard, now reaching completion. Through a number of examples, the paper shows that the framework has the flexibility to support a wide variety of extended transaction models. Although the framework is presented here in CORBA-specific terms, the main ideas are sufficiently general that it should be possible to use them in conjunction with other middleware.
Before using middleware in critical systems, integrators need information on the robustness of the software. This includes having a clear idea of the failure modes of middleware candidates, including their core and side elements. In this paper we describe ongoing work on the failure mode analysis of CORBA-based middleware. Our initial work targets the CORBA Name Service, and the characterization is addressed using fault injection techniques. We present the results of injecting corrupted messages into the targeted middleware. Our experiments have been performed on four different implementations, and some comparisons are provided. First lessons learnt from these experiments, from a critical system integrator's viewpoint, are also reported.
This research focuses on the development of a generic home network architecture and related middleware for supporting interoperability between ubiquitous consumer devices and computing devices in the home. In order to build a practical home network, design criteria are discussed, including real-time constraints and interoperability between heterogeneous protocols. As a result, a vertically configurable home network architecture and real-time middleware, called ROOM-BRIDGE, are presented that meet the design criteria. ROOM-BRIDGE is specially designed and implemented using an IEEE1394 backbone network to provide the proposed network architecture with a guarantee of seamless and reliable communication between heterogeneous sub-networks and ubiquitous devices in the home. The performance of the proposed network architecture and ROOM-BRIDGE was verified by prototype implementation and testing using a home network test bed.
In this paper, we demonstrate how component-based middleware can reduce the energy usage of closed-source applications. We first describe how the Puppeteer system exploits well-defined interfaces exported by applications to modify their behavior. We then present a detailed study of the energy usage of Microsoft's PowerPoint application and show that adaptive policies can reduce energy expenditure by 49% in some instances. In addition, we use the results of the study to provide general advice to developers of applications and middleware that will enable them to create more energy-efficient software.
This paper reports on our ongoing project to build system software
for audio and visual networked home appliances. In our system, we have
implemented two middleware components for making it easy to build future
networked home appliances. The first component is distributed home computing
middleware that provides high level abstraction to control respective home
appliances. The second component is a user interface middleware that enables us
to control home appliances from a variety of interaction devices.
Most of our system has been implemented in Java, but several timing-critical
programs have been implemented in C and run on Linux. The
combination of Linux and Java will be ubiquitous in future embedded systems.
It enables us to port home computing programs developed on a PC to target
systems without modifying them, and Java's language support enables us to build
complex middleware very easily. Also, our user interface middleware enables us
to adopt traditional user interface toolkits for developing home computing
applications, while allowing a variety of interaction devices to navigate the
graphical user interfaces provided by the applications.
OASIS is a role-based access control architecture for achieving
secure interoperation of independently managed services in an open, distributed
environment. OASIS differs from other RBAC schemes in a number of ways: role
management is decentralised, roles are parametrised, and privileges are not
delegated. OASIS depends on an active middleware platform to notify
services of any relevant changes in their environment.
Services define roles and establish formally specified policy for role
activation and service use; users must present the required credentials and
satisfy specified constraints in order to activate a role or invoke a
service. The membership rule of a role indicates which of the role
activation conditions must remain true while the role is active. A role is
deactivated immediately if any of the conditions of the membership rule
associated with its activation become false.
Instead of privilege delegation, OASIS introduces the notion of appointment,
whereby being active in certain roles carries the privilege of issuing
appointment certificates to other users. Appointment certificates capture
the notion of long-lived credentials such as academic and professional
qualifications or membership of an organisation. The role activation conditions
of a service may include appointment certificates, prerequisite roles and
environmental constraints.
We define the model and architecture and discuss engineering details, including
security issues. We illustrate how an OASIS session can span multiple
domains, and discuss how it can be used in a global environment where roving
principals, in possession of appointment certificates, encounter and wish to use
services. We propose a minimal infrastructure to enable widely
distributed, independently developed services to enter into agreements to
respect each other's credentials.
We speculate on a further extension to mutually unknown, and therefore untrusted,
parties. Each party will accumulate audit certificates which embody its
interaction history and which may form the basis of a web of trust.
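A minimal sketch of the membership-rule idea (the class and all names below are hypothetical, not the OASIS API): a role remains active only while every condition of its membership rule holds, and the event-based middleware deactivates it as soon as one becomes false.

```python
# Illustrative sketch of membership-rule enforcement: the middleware calls
# on_environment_change() whenever a relevant condition may have changed,
# and the role is deactivated immediately if any condition is now false.

class ActiveRole:
    def __init__(self, name, membership_conditions):
        self.name = name
        self.conditions = membership_conditions  # list of zero-arg predicates
        self.active = True

    def on_environment_change(self):
        # Invoked by the (event-based) active middleware platform.
        if not all(cond() for cond in self.conditions):
            self.active = False

# Example: the role stays active only while the user remains logged in.
logged_in = {"alice": True}
role = ActiveRole("doctor_on_duty", [lambda: logged_in["alice"]])
role.on_environment_change()
assert role.active
logged_in["alice"] = False          # a membership condition becomes false
role.on_environment_change()
assert not role.active
```

Note the asymmetry the abstract describes: activation conditions (credentials, prerequisite roles, constraints) are checked once at activation, while only those named in the membership rule must remain true for the role's lifetime.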
We present a solution to guarantee scalable causal ordering through matrix clocks in Message Oriented Middleware (MOM). This solution is based on a decomposition of the MOM into domains of causality, i.e. small groups of servers interconnected by router servers. We prove that, provided the domain interconnection graph has no cycles, global causal order on message delivery is guaranteed through purely local order (within domains). This allows the cost of matrix clock maintenance to be kept linear, instead of quadratic, in the size of the application. We have implemented this algorithm in a MOM, and the performance measurements confirm the predictions.
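As a minimal illustration of causal delivery (using plain vector clocks rather than the paper's matrix clocks, with invented names): a message from sender s stamped with vector V is deliverable at a process whose local vector is L iff V[s] = L[s] + 1 and V[k] <= L[k] for every other k, i.e. it is the next message from s and all of its causal predecessors have already been delivered.

```python
# Standard vector-clock deliverability check (background illustration only;
# the paper's contribution is keeping *matrix* clock costs linear by
# splitting the MOM into acyclic domains of causality).

def deliverable(V, L, s):
    """True iff message with timestamp V from sender s can be causally
    delivered at a process with local vector clock L."""
    return (V[s] == L[s] + 1 and
            all(V[k] <= L[k] for k in range(len(V)) if k != s))

L = [0, 0, 0]                            # receiver has delivered nothing yet
assert deliverable([1, 0, 0], L, 0)      # first message from process 0: ok
assert not deliverable([1, 1, 0], L, 1)  # depends on an undelivered message
```

Matrix clocks extend each entry of L into a row per process, which is what makes their maintenance cost quadratic in general and motivates the domain decomposition above.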
This paper presents the design and evaluation of Pastry, a
scalable, distributed object location and routing substrate for wide-area
peer-to-peer applications. Pastry performs application-level routing and object
location in a potentially very large overlay network of nodes connected via the
Internet. It can be used to support a variety of peer-to-peer applications,
including global data storage, data sharing, group communication and naming.
Each node in the Pastry network has a unique identifier (nodeId). When presented
with a message and a key, a Pastry node efficiently routes the message to the
node with a nodeId that is numerically closest to the key, among all currently
live Pastry nodes. Each Pastry node keeps track of its immediate neighbors
in the nodeId space, and notifies applications of new node arrivals, node
failures and recoveries. Pastry takes into account network locality; it
seeks to minimize the distance messages travel, according to a scalar
proximity metric such as the number of IP routing hops.
Pastry is completely decentralized, scalable, and self-organizing; it
automatically adapts to the arrival, departure and failure of nodes.
Experimental results obtained with a prototype implementation on an emulated
network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its
ability to self-organize and adapt to node failures, and its good network
locality properties.
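The routing invariant described above, delivery of a keyed message to the live node whose nodeId is numerically closest to the key, can be sketched as follows (a toy illustration with invented 8-bit nodeIds; real Pastry reaches this node in a logarithmic number of hops using routing tables and leaf sets, which the sketch omits):

```python
# Toy illustration of Pastry's routing invariant: among all currently live
# nodes, the message for a given key ends up at the node whose nodeId is
# numerically closest to the key.

def closest_node(node_ids, key):
    """Return the live nodeId numerically closest to key (ties -> lower id)."""
    return min(node_ids, key=lambda n: (abs(n - key), n))

live_nodes = [0x10, 0x42, 0x7F, 0xC3]        # hypothetical 8-bit nodeIds
assert closest_node(live_nodes, 0x40) == 0x42
assert closest_node(live_nodes, 0xF0) == 0xC3

# Self-organization in miniature: when a node fails, the remaining nodes
# automatically become responsible for its keys.
live_nodes.remove(0x42)
assert closest_node(live_nodes, 0x40) == 0x10
```

This closest-node mapping is also what applications build on: for example, a global data store can place a replica of an object at the nodes whose nodeIds are closest to the object's key.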
Applications built on networked collections of computers are increasingly using distributed object platforms such as CORBA, Java RMI, and DCOM to standardize object interactions. With this increased use comes the increased need for enhanced Quality of Service (QoS) attributes related to fault tolerance, security, and timeliness. This paper describes an architecture called CQoS (Configurable QoS) for implementing such enhancements in a transparent, highly customizable, and portable manner. CQoS consists of two parts: application- and platform-dependent interceptors and generic QoS components. The generic QoS components are implemented using Cactus, a system for building highly configurable protocols and services in distributed systems. The CQoS architecture and the interfaces between the different components are described, together with implementations of QoS attributes using Cactus and interceptors for CORBA and Java RMI. Experimental results are given for a test application executing on a Linux cluster using Cactus/J, the Java implementation of Cactus. Compared with other approaches, CQoS emphasizes portability across different distributed object platforms, while the use of Cactus allows custom combinations of fault-tolerance, security and timeliness attributes to be realized on a per-object basis in a straightforward way.
Different distributed component-based applications (e.g., distributed
multimedia, library information retrieval, secure stock trading applications),
running in heterogeneous execution environments, need different quality of
service (QoS). The semantics of QoS requirements and their provisions are
application-specific, and they vary among different application domains.
Furthermore, QoS provisions vary per application in heterogeneous execution
environments due to varying distributed resource availability. Making these
applications QoS-aware during the development phase, and ensuring their QoS
guarantees during the execution phase, is complex and hard.
In this paper, we present a unified QoS management framework, called 2KQ+. This
framework extends our existing run-time 2KQ middleware system by including
our uniform QoS programming environment and our automated QoS compilation system
(Q-Compiler). The uniform QoS programming and its corresponding QoS compilation
allow and assist the application developer in building different component-based
domain applications in a QoS-aware fashion. Furthermore, this novel programming
and compilation environment enables the applications to be instantiated,
managed, and controlled by the same reconfigurable, component-based run-time
middleware, such as 2KQ, in heterogeneous environments.
Our experimental results show that different QoS-aware applications, using the
2KQ+ framework, are configured and set up quickly and efficiently.