
Service Schedules and Queues

The performance of a communications path through a number of links is made up of contributions from many places. The raw throughput of each link in the path (the capacity) comes from the technology used, as does the error rate due to noise. The delay for a given path is made up of two main contributions: propagation time, and store and forward time in routers and switches, bridges, hubs and so on along the way. For fixed wire terrestrial networks, the propagation time is unalterable, so the main thing that can be changed is the store and forward time at the interconnect devices. This is just like the time spent in a car journey waiting at traffic lights. In a road system without lights or toll roads, all cars are typically treated the same - the model is ``First Come First Serve'' or ``First In First Out''. The best effort service in the Internet is the same.
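
As a rough illustration of how these contributions add up, here is a minimal sketch in Python; the packet size and the per-hop capacities, propagation delays and queueing waits are made-up figures, not measurements:

    # Illustrative sketch: end-to-end delay of one packet over a path of
    # store-and-forward hops served first-in-first-out.  All figures are
    # invented for the example.
    PACKET_BITS = 1500 * 8  # a typical Ethernet-sized packet

    # Each hop: (capacity in bit/s, propagation delay in s, FIFO queueing wait in s).
    hops = [
        (10_000_000, 0.002, 0.000),   # 10 Mb/s LAN link, idle
        (2_000_000,  0.010, 0.004),   # 2 Mb/s access link, small backlog
        (34_000_000, 0.020, 0.001),   # 34 Mb/s wide-area link
    ]

    total = 0.0
    for capacity, propagation, queueing in hops:
        transmission = PACKET_BITS / capacity   # store-and-forward time at this hop
        total += transmission + propagation + queueing

    print(f"end-to-end delay: {total * 1000:.2f} ms")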

Changing this service for some users involves recognizing their traffic and giving it different treatment in the queue. In a lightly loaded network, just as on a lightly loaded road, there is typically no queue! Having said that, arriving early does not necessarily buy anything - this is captured in the notion of ``work conservation'': a scheduler that is not work conserving may hold traffic back even when the link is idle. Indeed, in digital telephone networks this is enforced by a queue discipline which separates each call's traffic from all other traffic and gives each call its own schedule, usually through Time Division Multiplexing.
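
To make the distinction concrete, the following sketch shows a TDM-like, non-work-conserving discipline; the slot length, number of calls and arrival time are invented for the example. A packet that arrives early is still held until its call's own slot, even if the link is idle:

    import math

    # Illustrative sketch: a TDM-like, non-work-conserving schedule.  Each call
    # owns one slot per frame; a packet waits for its call's next slot even if
    # the link is idle.  Slot length and call count are invented.
    SLOT = 0.005   # 5 ms per slot
    CALLS = 4      # calls 0..3, one slot each per frame

    def tdm_slot_start(call, arrival):
        """Start of the earliest slot belonging to `call` at or after `arrival`."""
        frame = SLOT * CALLS
        offset = call * SLOT          # this call's position in every frame
        n = max(0, math.ceil((arrival - offset) / frame))
        return n * frame + offset

    # A packet for call 2 arriving at t = 1 ms is held until its slot at t = 10 ms.
    print(tdm_slot_start(2, 0.001))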

In the Internet, none of the future Integrated Services that we are going to describe in this chapter do this. Instead, what they do is simply allow higher quality traffic to work its way ahead of lower quality traffic. The notion, then, is that to permit quality of service for some users, the others must (at least at busy times) get a smaller share.
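
A minimal sketch of that idea is strict priority between two classes; the class names are illustrative, and a real scheduler would also bound how long the lower class can be starved:

    from collections import deque

    # Illustrative sketch: strict priority service between two classes.  The
    # premium queue is always served first, so at busy times best-effort
    # traffic necessarily gets a smaller share of the link.
    queues = {"premium": deque(), "best-effort": deque()}

    def enqueue(klass, packet):
        queues[klass].append(packet)

    def dequeue():
        for klass in ("premium", "best-effort"):   # highest class first
            if queues[klass]:
                return queues[klass].popleft()
        return None                                # nothing queued: link idle

    enqueue("best-effort", "b1")
    enqueue("premium", "p1")
    print(dequeue())   # "p1": premium jumps ahead of the earlier best-effort packet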

Quite a few different queueing systems have been proposed to do this. The baseline for comparison is called Fair Queueing, which is essentially a round robin scheduler over the source-destination pairs currently using a route through this particular router. Fair Queueing can be extended to include a notion of weight (i.e. systematic unfairness, perhaps associated with importance or money). Other mechanisms include approximations to fair queueing (e.g. stochastic fair queueing).
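
To give a flavour of these schemes, here is a weighted round robin over per-flow queues. This is only a crude approximation to weighted Fair Queueing (which schedules on per-packet virtual finish times), and the flow identifiers and weights are invented:

    from collections import deque

    # Illustrative sketch: weighted round robin over per-flow queues, a crude
    # approximation to (weighted) Fair Queueing.  Flows are keyed by
    # source-destination pair; keys and weights are invented for the example.
    flows = {
        ("10.0.0.1", "10.0.0.2"): deque(),   # pair A
        ("10.0.0.3", "10.0.0.4"): deque(),   # pair B
    }
    weights = {
        ("10.0.0.1", "10.0.0.2"): 1,         # pair A: normal share
        ("10.0.0.3", "10.0.0.4"): 2,         # pair B: double share (systematic unfairness)
    }

    def schedule():
        """Yield queued packets, giving each flow up to `weight` packets per round."""
        while any(flows.values()):
            for key, queue in flows.items():
                for _ in range(weights[key]):
                    if queue:
                        yield queue.popleft()

    flows[("10.0.0.1", "10.0.0.2")].extend(["a1", "a2", "a3"])
    flows[("10.0.0.3", "10.0.0.4")].extend(["b1", "b2", "b3"])
    print(list(schedule()))   # ['a1', 'b1', 'b2', 'a2', 'b3', 'a3']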

A given device can implement several different queueing mechanisms, and sort packets into the appropriate queue based on some notion of packet classification, so that we can retain best effort service while supporting better effort for some users (albeit at the cost of a lower share for best effort traffic).
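
A minimal sketch of such a classifier follows; the header fields, port number and class names are purely illustrative. Packets matching a rule go to a preferential queue, and everything else falls through to best effort:

    from collections import deque

    # Illustrative sketch: classify each packet into a per-class queue, keeping
    # a default best-effort queue for anything that matches no rule.  Field
    # names, the port number and the class labels are invented.
    queues = {"premium": deque(), "best-effort": deque()}

    def classify(packet):
        # e.g. traffic recognised by a sender's marking or by its source port
        if packet.get("marked") or packet.get("src_port") == 5004:
            return "premium"
        return "best-effort"

    def enqueue(packet):
        queues[classify(packet)].append(packet)

    enqueue({"src_port": 5004, "payload": "audio"})
    enqueue({"src_port": 80,   "payload": "web"})
    print(len(queues["premium"]), len(queues["best-effort"]))   # 1 1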

