
What is ``Bandwidth''?

The term ``bandwidth'' is used by electrical engineers to refer to the frequency range of an analog signal. Often, especially in the Internet community, the term is used loosely to refer to channel capacity, or the bit rate of a link. Note that because of sampling, quantization, and compression, the bit rate needed to carry an analog signal of a given bandwidth can be many times (even orders of magnitude) less than a perfect sampled representation of the signal would imply.
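To make the ``perfect sample'' baseline concrete, here is a minimal sketch of the arithmetic, assuming Nyquist-rate sampling (twice the signal bandwidth); the telephony and CD figures used below are the standard values, but the function itself is purely illustrative:

    # Raw (uncompressed) PCM bit rate implied by Nyquist-rate sampling.
    def pcm_bit_rate(bandwidth_hz, bits_per_sample, channels=1):
        sample_rate = 2 * bandwidth_hz   # Nyquist: sample at twice the bandwidth
        return sample_rate * bits_per_sample * channels

    print(pcm_bit_rate(4000, 8))                # telephone speech: 64000 bps (64Kbps PCM)
    print(pcm_bit_rate(22050, 16, channels=2))  # CD audio: 1411200 bps (about 1.4Mbps)

Compression then works below these raw rates, which is why a 64Kbps voice channel can often be carried in far fewer bits.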

Analog audio for humans lies roughly in the range 50Hz to 20KHz. Human speech typically remains intelligible even when restricted to the range 300Hz-3.4KHz, and the telephone networks have taken advantage of this since their earliest days by providing only band-limited lines. This also means that they can use low-quality speakers and microphones in the handset; the resulting quality is similar to that of AM radio.

It is not entirely a coincidence, therefore, that the copper wires used for transmission in the telephone system were principally chosen for their ability to carry a baseband signal conveying speech (``toll'') quality audio.

Luckily, in most systems the phone wires are over-engineered. They are capable of carrying a signal at up to 16 times the ``bandwidth'' used by pure analog phones from the home to the exchange over a kilometre, and up to 300 times that bandwidth over runs of 100 metres or less. For the moment, though, the ``last mile'' or customer subscriber-loop circuits have boxes at the ends that limit this to what is guaranteed for ordinary audio telephony, while the rest of the frequencies are used for engineering work.

Video signals, on the other hand, occupy a much wider frequency range. Analog TV, which defined the input, output and transmission standards for about 60 years, comes in several different variants; in the NTSC system, for example, the colour information is modulated on a 3.58MHz subcarrier. The signal conveyed is a sequence of ``scanlines'', which together make up a screen. A scanline is essentially a sample of the brightness and colours across a horizontal line as detected in the camera, and as used to control the electron gun in the TV monitor.

The CCIR 601 standard defines a digital version of this, in which brightness (luminance) is sampled at 13.5MHz and each of the two colour (chrominance) components at half that rate, all at 8 bits per sample. The resulting data rate for the active picture area is around 166Mbps.
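The arithmetic behind that figure, as a rough sketch (the 13.5MHz sampling rate is from the standard; the gross/active split is our own framing):

    # CCIR 601 (4:2:2) gross data rate
    luma_rate   = 13_500_000        # luminance samples per second
    chroma_rate = luma_rate // 2    # each of the two chrominance components
    bits        = 8
    gross = (luma_rate + 2 * chroma_rate) * bits
    print(gross)                    # 216000000 bps gross; the active picture
                                    # area accounts for the ~166Mbps quoted above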

It is not entirely a coincidence that old cable TV networks are capable of transmitting these data rates; modern hybrid fiber-coax networks, however, are targeted at carrying a much larger number of compressed digital channels.

The purpose of talking about media encodings and channel capacity requirements is to show the relationship between particular media and the transmission technology associated with them. The two largest networks in the world in terms of terminals are the phone network and the TV network. Each addresses a particular capacity and pattern of communication. If a data network such as the Internet is to carry these media, and if the ``terminals'', or workstations and PCs, of the Internet are to be able to capture, store, transmit, receive, and display such media, then the network and end systems have to deal with these types of data, one way or another. If we compress the data and decompress it at the receiver, then the full rate/capacity is still required outside the compressed domain, and the compression/decompression engines need to be able to cope with this, even though they may spare the network (or storage device).
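A back-of-the-envelope sketch of this point: even if the network only ever sees the compressed stream, the codec at each end must sustain the full uncompressed rate. The 166Mbps figure is the CCIR 601 rate above; the compressed rate is a hypothetical example, not taken from any particular codec:

    uncompressed_bps = 166_000_000
    compressed_bps   = 4_000_000    # hypothetical broadcast-quality compressed channel
    ratio = uncompressed_bps / compressed_bps
    print(round(ratio))             # ~42x: the network is spared this factor, but the
                                    # codec must still process data at the full 166Mbps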

The word ``encoding'' is often used as a noun as well as a verb when talking about multimedia. Nowadays there is a vast range of encodings in use or under development, for a variety of reasons. Codecs for audio and video depend on the quality required: a very simple example of this is the difference between digital audio for ISDN telephones (64Kbps PCM; see later) and for CD (1.4Mbps, 16 bit, etc.). Another reason for the range of encodings is that some include linkages to other media for reasons of synchronization (e.g. between voice and lips); yet another is to provide future-proofing against new media (holograms?); finally, because of the range of performance of different computers, it may be necessary to have a ``meta-protocol'' to negotiate what is used between encoder and decoder. This permits a program to encode a stream of media according to whatever is convenient to it, while a decoder can then decode it according to its capabilities, as sketched below. For example, some HDTV (High Definition Television) standards are actually a superset of current standard TV encoding, so that a ``rougher'' picture can be extracted by existing TV receivers from new HDTV transmissions (or from playing back new HDTV videotapes). This principle is quite general.
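As a toy illustration of such negotiation, here is a minimal sketch, assuming a simple preference-ordered offer from the encoder; all the names here are invented for illustration, not taken from any real protocol:

    # Pick the first encoding the encoder offers that the decoder supports.
    # encoder_offers is in order of preference; returns None if there is no overlap.
    def negotiate(encoder_offers, decoder_supports):
        for encoding in encoder_offers:
            if encoding in decoder_supports:
                return encoding
        return None

    print(negotiate(["HDTV", "PAL", "64Kbps-PCM"], {"PAL", "64Kbps-PCM"}))  # -> PAL

A real meta-protocol would also carry parameters (frame size, sample rate, bit budget) and might renegotiate mid-stream, but the shape of the exchange is the same.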

