
Multiplexing and Synchronising

In networked multimedia standards, the multiplexing function defines the way that multiple streams of data, of the same or different media, are carried from source to sink over a channel. There are at least three quite different points in this path where we can perform this function: we can design a multimedia codec which mixes together the digitally coded (and possibly compressed) streams as it generates them, possibly interleaving media at a bit-by-bit level of granularity; we can design a multiplexing layer that mixes together the different media as it packetizes them, possibly interleaving samples of different media in the same packets; or we can let the network do the multiplexing, packetizing the different media streams completely separately.
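As a very rough illustration (a hypothetical Python sketch, not any real codec or protocol API), the three options differ only in where the interleaving of coded media units takes place:

# Hypothetical sketch: three places to multiplex audio and video.
# audio_units and video_units stand in for lists of coded media fragments (bytes).

def codec_level_mux(audio_units, video_units):
    # Option 1: the codec itself emits a single interleaved bitstream.
    stream = bytearray()
    for a, v in zip(audio_units, video_units):
        stream += a + v            # interleave at (near) bit/byte granularity
    return bytes(stream)           # one opaque stream leaves the codec

def packet_level_mux(audio_units, video_units):
    # Option 2: a packetization layer mixes media into shared packets.
    return [a + v for a, v in zip(audio_units, video_units)]

def network_level_mux(audio_units, video_units):
    # Option 3: each medium is packetized separately; the network itself
    # (IP, ATM, ...) carries and effectively multiplexes the two flows.
    return list(audio_units), list(video_units)

In the first case the receiver must parse the whole interleaved stream before it can recover any one medium; in the third, each medium can be received, routed and processed independently.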

The approaches have different performance benefits and costs, and all three are in use for Internet multimedia. Some of the costs are what engineers call ``non-functional'' ones, which derive from the business cases of the organisations defining the schemes.

There are a lot of players (``stakeholders'') in the multimedia marketplace. Many of them have devised their own system architectures - not least of these are the ITU, ISO, DAVIC and the IETF.

The ITU has largely been concerned with videotelephony, whilst DAVIC has concerned itself with digital broadcast technology, and the IETF has slowly added multimedia (both store-and-forward and real-time) to its repertoire.

Each group has its own mechanism or family of mechanisms for identifying media in a stream or on a store, and for multiplexing over a stream. The design criteria were different in each case, as were the target networks and underlying infrastructure. This has led to some confusion which will probably persist for a few years yet.

Here we look at the four major players and their three major architectures for a multimedia stream. Two earlier, brave attempts to make sense of this jungle came from Apple and Microsoft, and we discuss them briefly; Microsoft have since made changes to their architecture at many levels, which are described in their product specifications, so we do not cover them here.

To cut to the chase, the ITU defines a bit-level interleave or multiplex appropriate to low-cost, low-latency terminals and a bit-pipe model of the network, while the ISO MPEG group defines a codec-level interleave appropriate to high-quality, but possibly higher-cost, digital multimedia devices (it is hard to leave out a function); finally, the DAVIC and Internet communities define the multiplexor to be the network, although DAVIC assumes an ATM network whereas the Internet community obviously assumes an IP network as the fundamental layer.

The Internet community tend to try to make use of anything that it is possible to use, so that if an ITU, DAVIC or ISO codec is available on an Internet-capable host, someone, somewhere will sometime devise a way to packetize its output into IP datagrams. The problem is that, unless one takes the purist approach of separate media in separate packets, there are then potentially several layers of multiplexing. In a classic paper, ``Layered Multiplexing Considered Harmful'', David Tennenhouse describes the technical reasons why this is a very bad architecture for communicating software systems. Note that this is not a critique of the ISO MPEG, DAVIC or ITU H.320 architectures: they are beautiful pieces of design fit for a particular purpose; it is merely an observation that it is better to unpick their multiplex in an Internet-based system. It certainly leads to more choice over where to carry out other functions (e.g. mixing, re-synchronisation, transcoding, and so on).
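To make the idea of unpicking the multiplex concrete, the following hypothetical Python sketch (the function and type names are illustrative, not taken from any standard) demultiplexes an already-interleaved stream back into per-medium flows, so that each can be packetized, mixed or transcoded on its own:

# Hypothetical sketch: unpicking a combined multiplex before sending over IP.
# interleaved_units is a list of (media_type, payload) pairs produced by some
# codec- or systems-level multiplex.

def unpick_and_packetize(interleaved_units):
    per_medium = {}
    for media_type, payload in interleaved_units:
        per_medium.setdefault(media_type, []).append(payload)
    # Each medium would then be framed and sent as its own packet stream
    # (e.g. its own RTP session), rather than being tunnelled, still
    # interleaved, inside IP packets and multiplexed a second time.
    return per_medium

Carrying the streams separately in this way is what gives the Internet approach its flexibility: a mixer or transcoder can operate on the audio alone without ever parsing the video.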

In the next sub-sections, we describe these schemes in some detail.


