
Classification and Admission

A class is typically supported by some queuing discipline applied to a particular flow of traffic. This may be something set up by a network manager, perhaps programmed into a router, or it may be requested by a user (or by a site, or between one network and another) via a so-called signaling protocol.

In the Internet, the signaling protocol has to convey not only the traffic flow category and its parameters, but also a way for a router to recognize the packets belonging to the flow, since there is no ``virtual circuit'' or ``flow identifier'' in current Internet Protocol packets. Of course, IP version 6 will change this, if and when it sees deployment.

This ``classification'' is based simply on a set of packet fields that remain constant for a flow - for example, the UDP or TCP port numbers (or any other transport-level protocol demultiplexing field), the IP-level transport protocol identifier, together with the source and destination IP host (interface) addresses, serve to uniquely identify a flow for FTP, the Web, and most Mbone applications.

To create this classification dynamically, and to set up the right mapping from it into the right queues in routers along a path, the Internet community has devised RSVP, the Resource Reservation Protocol.
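
As a concrete illustration, the sketch below (in C) shows the kind of classifier state a router might hold once a reservation is installed: a key of fields that stay constant for the flow, and a mapping from that key to a queue. The structure and function names here are hypothetical assumptions, not taken from RSVP or from any particular router implementation.

  #include <stdint.h>
  #include <stddef.h>

  /* Fields that remain constant for the lifetime of a flow (a "5-tuple"). */
  struct flow_key {
      uint32_t src_addr;   /* source IP (interface) address            */
      uint32_t dst_addr;   /* destination IP (interface) address       */
      uint8_t  protocol;   /* IP transport protocol id, e.g. 6 for TCP */
      uint16_t src_port;   /* transport-level source port              */
      uint16_t dst_port;   /* transport-level destination port         */
  };

  /* One installed classification: this flow goes to this queue. */
  struct classifier_entry {
      struct flow_key key;
      int             queue_id;
  };

  #define MAX_FLOWS 256
  static struct classifier_entry table[MAX_FLOWS];
  static size_t n_flows;

  /* Called when signaling installs a reservation along the path. */
  int classifier_install(struct flow_key key, int queue_id)
  {
      if (n_flows == MAX_FLOWS)
          return -1;                       /* no room: refuse */
      table[n_flows].key = key;
      table[n_flows].queue_id = queue_id;
      n_flows++;
      return 0;
  }

  /* Per-packet lookup; unclassified traffic falls back to best-effort queue 0. */
  int classifier_lookup(const struct flow_key *k)
  {
      for (size_t i = 0; i < n_flows; i++) {
          const struct flow_key *e = &table[i].key;
          if (e->src_addr == k->src_addr && e->dst_addr == k->dst_addr &&
              e->protocol == k->protocol &&
              e->src_port == k->src_port && e->dst_port == k->dst_port)
              return table[i].queue_id;
      }
      return 0;
  }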

When a service request is made, the network has a chance to do something it cannot do in the normal IP router case: it can decide in advance whether it can support the request, and has the option to deny access (or at least deny guarantees of service) to a flow. This is known as an ``admission test''. It depends on the user knowing their own traffic patterns, which is not always possible (though for many applications, the programmer may have calibrated them and wired in these parameters). Where this is not possible, the network may simply monitor the traffic and carry out ``measurement-based admission''.
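
As a rough sketch of what a parameter-based admission test might look like, the fragment below admits a new flow only if the sum of reserved average rates stays within a configured share of the link, leaving the remainder for best-effort traffic. The capacity, the 90% reservable fraction, and the names are illustrative assumptions, not drawn from any standard.

  #include <stdio.h>

  #define LINK_CAPACITY_BPS   10e6    /* 10 Mb/s link (assumed)          */
  #define RESERVABLE_FRACTION 0.9     /* leave headroom for best effort  */

  static double reserved_bps;         /* sum of admitted average rates   */

  static int admit(double avg_rate_bps)
  {
      if (reserved_bps + avg_rate_bps >
          LINK_CAPACITY_BPS * RESERVABLE_FRACTION)
          return 0;                   /* deny: would overcommit the link */
      reserved_bps += avg_rate_bps;
      return 1;                       /* admit and account for the flow  */
  }

  int main(void)
  {
      printf("1 Mb/s flow admitted? %d\n", admit(1e6));  /* prints 1 */
      printf("9 Mb/s flow admitted? %d\n", admit(9e6));  /* prints 0 */
      return 0;
  }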

Parameters for quality of service typically include average and peak values for throughput, delay, and errors; in practice, these may be expressed as burstiness, end-to-end delay, jitter, and a worst-case error (or residual packet loss) rate.
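
One commonly cited relation among these parameters: if a flow is described by a token bucket with burst size b and average rate r, and is served at a fixed rate R >= r, its worst-case queueing delay is roughly b/R (ignoring packetization effects). The fragment below simply evaluates that bound; the numbers are illustrative assumptions.

  #include <stdio.h>

  /* Illustrative traffic and service parameters (assumed, not from the text). */
  struct flowspec {
      double avg_rate;    /* average (token) rate, bytes/s */
      double peak_rate;   /* peak sending rate, bytes/s    */
      double burst;       /* maximum burst size, bytes     */
  };

  int main(void)
  {
      struct flowspec f = { 64e3, 256e3, 8e3 };  /* 64 kB/s avg, 8 kB bursts */
      double service_rate = 128e3;               /* rate reserved for the flow */

      /* Worst-case queueing delay for a token-bucket flow served at a
       * fixed rate of at least its average rate: burst / service rate. */
      if (service_rate >= f.avg_rate)
          printf("worst-case queueing delay: %.3f s\n", f.burst / service_rate);
      return 0;
  }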

The debate has raged for many years over which parameter set is necessary and sufficient for Internet (computer-based) multimedia. When a sender can adapt its sending rate dynamically to perceived conditions, and a receiver can adapt to measured conditions (as well as interpolate or extrapolate to cover loss, excessive delay, and so on), more flexibility is possible in these parameters than is traditionally provided.

For example, delay adaptation at a receiver is simply achieved: so long as the average rate of the traffic is accommodated across the network, the peak rate is buffered, and the delay variance caused by bursts at the peak rate, or by other traffic (depending on the queuing disciplines used in intermediate nodes), does not exceed some peak delay bound, then smooth playout of media is quite straightforward.
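
A minimal sketch of how such delay adaptation can be done at the receiver: keep exponentially weighted estimates of the transit delay and its variation, and schedule each packet's playout a few deviations beyond the estimated delay. The smoothing constants (1/16) and the margin of four deviations are illustrative assumptions, in the spirit of estimators commonly used for audio playout.

  #include <math.h>
  #include <stdio.h>

  /* Receiver-side playout adaptation: smoothed estimates of the one-way
   * delay and its variation, used to pick a playout point that absorbs
   * most of the jitter. */
  struct playout {
      double delay;   /* smoothed transit delay estimate, seconds   */
      double var;     /* smoothed delay variation estimate, seconds */
  };

  /* Update the estimates with one packet's measured transit delay and
   * return the total delay (network + buffering) at which to play it. */
  static double playout_point(struct playout *p, double transit)
  {
      double diff = transit - p->delay;
      p->delay += diff / 16.0;                   /* slow tracking of the mean */
      p->var   += (fabs(diff) - p->var) / 16.0;  /* slow tracking of jitter   */
      return p->delay + 4.0 * p->var;            /* headroom of 4 deviations  */
  }

  int main(void)
  {
      struct playout p = { 0.1, 0.0 };           /* start near 100 ms */
      double samples[] = { 0.10, 0.12, 0.09, 0.15, 0.11 };
      for (int i = 0; i < 5; i++)
          printf("playout delay: %.4f s\n", playout_point(&p, samples[i]));
      return 0;
  }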

In fact, a combination of an adaptive playout buffer and interpolation can tolerate a modest percentage of packets arriving too late. Furthermore, one can devise coding schemes that permit high loss tolerance, which means that they can co-exist with highly bursty traffic, with either a more heavily loaded network (i.e. more delay variance caused by more bursts) or less well policed or shaped queues in the intermediate systems. This is discussed in greater detail in chapter five.
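
As a minimal illustration of interpolating over a late or lost packet, the fragment below bridges a single missing audio frame by linear interpolation between the samples on either side of the gap; real concealment schemes are considerably more sophisticated, and the frame length and values here are assumed.

  #include <stdio.h>

  #define FRAME_LEN 8   /* samples per frame (illustrative) */

  /* Conceal a single lost frame by linear interpolation between the last
   * sample before the gap and the first sample after it: a crude stand-in
   * for the interpolation schemes discussed in the text. */
  static void conceal(short prev_last, short next_first, short out[FRAME_LEN])
  {
      for (int i = 0; i < FRAME_LEN; i++)
          out[i] = (short)(prev_last +
                           (next_first - prev_last) * (i + 1) / (FRAME_LEN + 1));
  }

  int main(void)
  {
      short gap[FRAME_LEN];
      conceal(100, 1000, gap);          /* bridge a jump from 100 to 1000 */
      for (int i = 0; i < FRAME_LEN; i++)
          printf("%d ", gap[i]);        /* prints 200 300 ... 900 */
      printf("\n");
      return 0;
  }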

