The Entropic EPSRC ROPA Project

Overview
Present resource management schemes are based on models, explicit or implicit, of how customers behave. Unfortunately, attempts to generate such models of complex multimedia applications for resource management purposes have so far met with very little practical success.

However, a related complex modelling problem is found in statistical mechanics, and in this domain the mathematics of Large Deviation Theory has been successfully applied; the approach is to derive estimator functions and bounds for the behaviour of the system from observation and measurement rather than from a multi-parameter model.
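
To make this concrete, one standard large-deviations formulation (a sketch in assumed notation, not necessarily the project's own) relates the overflow probability of a buffered resource to a rate function estimated directly from measured traffic blocks:

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    % Assumed notation: $A(t)$ is the work arriving in $(0,t]$ at a
    % resource served at constant rate $C$, with buffer level $b$.
    The scaled cumulant generating function (the ``entropy'' of the
    stream) is
    \[
      \lambda(\theta) \;=\; \lim_{t\to\infty} \frac{1}{t}
        \log \mathbb{E}\!\left[e^{\theta A(t)}\right],
    \]
    and for large $b$ the stationary queue length $Q$ satisfies
    \[
      \mathbb{P}(Q > b) \;\approx\; e^{-\delta b},
      \qquad
      \delta \;=\; \sup\{\theta > 0 : \lambda(\theta) \le \theta C\}.
    \]
    Rather than computing $\lambda$ from a traffic model, it can be
    estimated from $K$ measured blocks, each of length $T$:
    \[
      \hat{\lambda}_T(\theta) \;=\; \frac{1}{T}
        \log \frac{1}{K} \sum_{k=1}^{K} e^{\theta A_k(T)} .
    \]
    \end{document}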

This project applies these techniques to the resource management problem in multiservice networks and multimedia operating systems. The key tasks are: deriving the observables required to provide the estimator functions; implementing online measurement of those observables; deriving admission criteria; and, finally, providing feedback mechanisms that enable applications to adapt their behaviour.

Entropic, a three-year EPSRC ROPA project ending in 1998, is a joint effort between the University of Cambridge and the University of Glasgow. More details of the project can be found on the Glasgow WWW Server. The work builds on theory developed in the Measure project.


Objectives
Multiservice networks and multimedia operating systems share a common problem: resource allocation. In order to support quality of service guarantees, network resources must be allocated between channels and operating system resources must be allocated between applications. How is this to be done?

  • Resources may be allocated on the basis of peak demands, in which case they will be underutilised.
  • Statistical multiplexing allows resources to be allocated on the basis of expected aggregate demand, in which case any guarantees will be probabilistic.
For multimedia communication and multimedia applications, a probabilistic guarantee is often sufficient and allows much more efficient use of resources. Hence we wish to understand, and make predictions about, the nature of that probabilistic guarantee.
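
As a toy illustration of this trade-off (all parameters below are invented for the example and the sketch is not drawn from the project's own work), consider a number of independent on-off sources: a Chernoff bound on the aggregate demand shows how much less capacity statistical multiplexing requires than peak allocation, at the price of a small overload probability.

    import math

    # Toy example: N independent on-off sources (all parameters invented).
    N = 100        # number of sources
    peak = 1.0     # peak rate of one source (rate units are arbitrary)
    p_on = 0.2     # probability a source is active at any instant
    target = 1e-6  # acceptable probability of aggregate overload

    # Peak allocation: provision for the worst case.
    peak_capacity = N * peak

    def chernoff_overload_bound(capacity):
        """Chernoff bound on P(aggregate demand > capacity)."""
        best = 1.0
        for i in range(1, 1000):
            theta = 0.01 * i
            # log moment generating function of one on-off source
            log_mgf = math.log(1 - p_on + p_on * math.exp(theta * peak))
            best = min(best, math.exp(N * log_mgf - theta * capacity))
        return best

    # Statistical allocation: smallest capacity meeting the target bound.
    capacity = p_on * N * peak          # start from the mean demand
    while chernoff_overload_bound(capacity) > target:
        capacity += 0.5

    print(f"peak allocation:        {peak_capacity:.1f}")
    print(f"statistical allocation: {capacity:.1f}  (overload bound < {target})")

For these toy numbers the target is met at well under half the peak allocation, which is exactly the saving that statistical multiplexing seeks to exploit.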

The approach taken in this work rests on two observations: traditional approaches based on modelling are proving extremely complex, while the target probability of failing to meet a guarantee is extremely low. These observations lead us to use measurement rather than modelling, and to apply large deviation theory to estimators derived from such measurements.

We propose to develop techniques which use online measurement to predict future resource usage. These techniques are applicable both to the problem of call admission in networks and to the problem of application startup and resource negotiation in an operating system designed to support multimedia applications.
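
A minimal sketch of how such a measurement-based admission test might look (the class, its parameters and the interface are illustrative assumptions, not the project's implementation):

    import math
    from collections import deque

    class MeasuredAdmission:
        """Sketch of measurement-based admission control (names are
        assumptions). The "entropy" of the carried traffic is
        estimated from recent block measurements; a new flow is
        admitted only if the predicted overflow decay rate still
        meets the QoS target."""

        def __init__(self, capacity, buffer_size, target_loss,
                     block_len, history=1000):
            self.C = capacity        # service rate (cells/second)
            self.T = block_len       # measurement block length (seconds)
            # P(Q > b) ~ exp(-delta * b) <= target_loss requires:
            self.delta_min = -math.log(target_loss) / buffer_size
            self.blocks = deque(maxlen=history)   # cells seen per block

        def observe_block(self, cells_in_block):
            self.blocks.append(cells_in_block)

        def _lambda_hat(self, theta):
            # Block estimator of the scaled cumulant generating function.
            mean_exp = (sum(math.exp(theta * a) for a in self.blocks)
                        / len(self.blocks))
            return math.log(mean_exp) / self.T

        def admit(self, peak_rate):
            """Admit if, with the new flow added at its declared peak
            rate (a deliberately conservative assumption), we still
            have lambda(theta) <= theta * C at theta = delta_min."""
            if not self.blocks:
                return peak_rate <= self.C   # no measurements yet
            theta = self.delta_min
            return (self._lambda_hat(theta) + theta * peak_rate
                    <= theta * self.C)

Because the estimator is driven purely by what is observed on the link, the admission decision tracks the traffic actually carried rather than declared traffic descriptors.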

If the techniques developed by this project can be used both in the end systems and in the networks, then there is potential to tackle one of the serious problems facing distributed multimedia systems, namely the mapping of quality of service (QoS) attributes across the end system/network and network/network boundaries.


Background
Little work has addressed the problem of resource management for operating systems supporting multimedia applications, other than architecturally. In multiservice networks, the approach has been to model traffic sources and to determine the expected behaviour of a number of sources sharing a common resource, such as a transmission link.

Such schemes have worked well in two extreme circumstances: where customer behaviour is well understood and relatively static; or where applications are simply forced to adapt to the resources provided. Traditional telecommunication services fall into the first category, while data communications fit the latter. Some have even built networks which strictly partition the capacity according to these criteria.

In the first case, emphasis is placed on stochastic modelling of customers, exploiting the fact that the in-band channels are both static and independent; strong results have been achieved in predicting and bounding the performance of such systems.

In the latter case, a key problem is that it is unrealistic to assume that the customers are independent (in particular, their in-band communications are highly correlated), so straightforward stochastic models cannot be used. Rather, the model in this case specifies causal behaviour and contains no notion of timeliness; we obtain correctness and statements about eventual progress, but little in the way of predicted performance.

Multimedia communication and processing fits neither category. Such applications are not well understood, and laboriously generating a model for a given application may be pointless: in the marketplace an application is often rapidly superseded by better versions or products. Further, the sensitivity of models to their parameters can make them useless for prediction. Nor can the behaviour of multimedia applications be described as static; their behaviour during an invocation can change drastically under user control. Finally, while multimedia applications are adaptable over some range of resource availability, they do require some minimal level of resource, averaged over a range of time-scales, and thus demand some predictive power from the resource management system.

In a system designed to give probabilistic guarantees where the probability of failure (e.g. cell loss, a missed deadline) is low, the probabilities can often be bounded using Large Deviation Theory. When modelling is used, the large deviation rate function is calculated from the model. An alternative approach, which bypasses modelling, estimates the rate function directly from measurements (e.g. inter-cell arrival times), exploiting the close analogy between large deviation rate functions and thermodynamic entropy. It was therefore proposed that the entropy of a cell stream be estimated directly at a multiplexer; from these data, predictions can be made rapidly using algorithms simple enough to be executed on-the-fly.
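
A minimal sketch of such an on-the-fly estimator (names and structure assumed for illustration): each arriving measurement block updates a running sum over a small grid of theta values, so the per-block cost is constant.

    import math

    class OnlineEntropyEstimator:
        """Running estimate of the scaled cumulant generating function
        ("entropy") of a cell stream, from fixed-length measurement
        blocks. A sketch; the update is O(number of theta values)."""

        def __init__(self, block_len, thetas):
            self.T = block_len               # block length (seconds)
            self.thetas = thetas             # grid of theta values
            self.sums = [0.0] * len(thetas)  # running sums of exp(theta * A_k)
            self.k = 0                       # blocks observed so far

        def add_block(self, cells_in_block):
            # Constant-time update per block: suitable for on-the-fly use.
            self.k += 1
            for i, theta in enumerate(self.thetas):
                self.sums[i] += math.exp(theta * cells_in_block)

        def lambda_hat(self):
            # lambda(theta) ~ (1/T) log((1/K) sum_k exp(theta * A_k(T)))
            return [math.log(s / self.k) / self.T for s in self.sums]

A loss prediction then follows by taking the largest theta on the grid for which lambda_hat(theta) <= theta * C and bounding the overflow probability by exp(-theta * b).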

Preliminary investigations using trace data suggested that the method is practicable. Using the Star Wars movie, reconstructed from a trace of the output of a DCT-based VBR video codec at Bellcore, the entropy of a multiplex of several such cell streams was estimated from the measured cell inter-arrival times. The estimated entropy was used to predict the cell loss ratio for the aggregate traffic, and the result was compared with measured values. The original algorithms gave good results at 99% load in a virtual buffer; recent theoretical work has yielded an improved algorithm which gives good results down to 85% load.
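
The measured side of such a comparison can be reproduced with a virtual buffer, i.e. a simulated FIFO driven by the trace. The sketch below uses a slotted fluid approximation with invented parameter names; it is an illustration, not the project's code.

    def virtual_buffer_clr(cell_counts, service_per_slot, buffer_size):
        """Feed measured per-slot cell counts through a virtual FIFO
        served at a fixed rate and return the measured cell loss
        ratio, for comparison against the entropy-based prediction."""
        queue = 0.0
        lost = 0.0
        total = 0.0
        for arrivals in cell_counts:
            total += arrivals
            queue += arrivals - service_per_slot
            if queue < 0.0:
                queue = 0.0                  # server idles
            elif queue > buffer_size:
                lost += queue - buffer_size  # overflow is dropped
                queue = buffer_size
        return lost / total if total else 0.0

Comparing the loss ratio returned by such a virtual buffer against the entropy-based prediction reproduces the kind of validation described above.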

An important element of the work is the use of observational techniques; this lends itself to environments in which components in the network are unknown and hence cannot be modelled. The ability to deal with such situations is vital in a competitive multi-operator environment, where operators would be unwise to provide detailed information on their network to customers.


Current Status
The programme aimed to apply the large deviation technique in three different domains: ATM LANs, ATM WANs and operating systems. Such breadth is required to address the issue of end-to-end quality of service in a realistic scenario.

One reason the project was split across two sites was to allow the experimental programme to be carried out over the SuperJANET ATM service. However, UKERNA has recently signalled the demise of that service, and we are now applying the techniques we have developed to Internet traffic measurement and estimation for QoS provision, together with a large commercial ISP.

The Internet has grown to the point where it is widely hailed as the vehicle for delivery of the advanced communications services of the future. As the network has grown, however, so too have the traffic demands. The absence of support for resource management in the currently deployed Internet protocols has led to a situation where the commercial viability of the network is now being questioned: how can customers be charged for access when the performance they experience cannot be guaranteed? More importantly, the emergence of Intranets has placed enterprises in a situation where they rely on Internet technology, but have limited understanding of how it will perform, and indeed limited ability to ensure the success of mission-critical functions. It is clear that in the emerging commercial IT environment, the ability to control the availability and reliability of the Intranet is a necessity.

We hope to use the remaining effort on the Entropic project to work towards solutions to these pressing problems.

