Devices that encode and decode, as well as compress and decompress, audio and video are called CODECs, or COder-DECoders. The term is sometimes applied to audio devices, but it is mainly used for video devices.
Voice coding techniques take advantage of features of the voice signal. In the time domain, adjacent speech samples are highly similar, so a system that sends only the differences between sample values achieves some compression. We can also see that low-intensity sample values occur far more often than high-intensity ones, which means we can afford to spend more bits representing the low values than the high ones. This can be done in a fixed way, and the A-law and mu-law encodings do just that by choosing a logarithmic encoding; or we can adapt to the signal, as APCM does. These techniques can be combined: ADPCM (Adaptive Differential Pulse Code Modulation) achieves a 50% saving over basic PCM with no apparent loss of quality and a relatively cheap implementation.
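The two ideas above can be sketched in a few lines of Python. This is a minimal illustration, not the G.711 bit layout: the companding functions use the standard mu-law curve with mu = 255 on samples normalised to [-1, 1], and the difference coder is a toy DPCM without quantisation.

```python
import math

MU = 255  # companding parameter used by mu-law PCM (G.711)

def mu_law(x: float) -> float:
    """Logarithmic companding: spends more of the code range on the
    low-amplitude values that dominate real speech."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_inv(y: float) -> float:
    """Inverse companding, recovering the original sample value."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def dpcm_encode(samples):
    """Send only the difference from the previous sample; adjacent
    speech samples are similar, so the differences are small."""
    prev, diffs = 0.0, []
    for s in samples:
        diffs.append(s - prev)
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Rebuild the samples by accumulating the received differences."""
    prev, out = 0.0, []
    for d in diffs:
        prev += d
        out.append(prev)
    return out
```

Note how mu_law stretches small inputs: a sample of 0.01 maps to roughly 0.23, so a coarse uniform quantiser applied after companding still resolves quiet passages well. ADPCM adds an adaptive step size to the difference coder.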
More ingenious compression relies on two things: a model of speech production and a model of the listener. Such techniques usually involve analysing the actual speech to derive a set of filter parameters, which are transmitted to the receiver and used to reconstruct the sound by applying them to raw ``sound'' from a single-frequency source and a white-noise generator. Examples of CODECs based on this idea are Linear Predictive Coding (LPC) and CELP (Code Excited Linear Prediction). Including a model of how humans perceive sound (so-called ``psycho-acoustics'') leads to more expensive, but highly effective, compression such as that used in MPEG audio CODECs.
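The filter parameters an LPC coder transmits can be computed with the classic autocorrelation method. The sketch below, assuming NumPy is available, estimates the prediction coefficients for one speech frame using the Levinson-Durbin recursion; real coders add windowing, quantisation of the coefficients, and pitch/voicing analysis, all omitted here.

```python
import numpy as np

def lpc_coeffs(frame, order=8):
    """Estimate linear-prediction coefficients for one frame of samples
    via the autocorrelation method and the Levinson-Durbin recursion.
    Returns (a, err): a[0] == 1, and sum(a[j] * x[n-j]) approximates the
    unpredictable residual; err is the final prediction-error energy."""
    frame = np.asarray(frame, dtype=float)
    n = len(frame)
    # autocorrelation at lags 0..order
    r = np.array([np.dot(frame[: n - k], frame[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # reflection coefficient for this recursion step
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a, err
```

Feeding these coefficients into an all-pole synthesis filter driven by a pulse train (voiced sounds) or white noise (unvoiced sounds) at the receiver reconstructs an approximation of the original speech, which is exactly the scheme LPC and CELP coders build on.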