The OSI Model is often criticised as overly complex and as offering
too many choices; such critics usually contrast it with the Internet
(TCP/IP) protocol suite.
It is hard to separate the implementation from the specification when
analysing these criticisms. For example, the idea that there are
simply "too many" layers does not hold water. A TP4/CLNP
implementation (the ISO Connection Oriented Transport Protocol, in its
appropriate class for running over the ISO datagram network protocol)
could be almost exactly as efficient as a TCP/IP one; indeed,
implementations exist that are.
The model has its use as a reference for comparing different protocol
systems, and as such it should be considered a major success. The
ISO protocols that instantiate the model in ISO stacks are a
completely separate matter.
The concept of layers introduced in the OSI model has two motivations.
Primarily technical but secondarily political: layering is a modularisation
technique, taken from software engineering and re-applied to the systems
engineering of communications architectures.
Secondarily technical but primarily political: each layer (module)
can be implemented by a different supplier, to a service specification, and
must rely only on the service specifications of the other layers (modules).
Why has this approach gone astray? For (at least) two reasons, one
technical and the other political.
The layering imposed politically essentially reflects a protectionist
approach to providers, such as PTTs and software and hardware vendors.
But the world has moved on: we now have much more mix and match,
and the walls between types of provider have been broken down.
Now you might get your host from an entertainment company, your
operating system from a PTT (e.g. Unix from AT&T), your communications
software from a university (TCP/IP on a PC from UCL), and so forth.
Software (and other) engineering has also moved on a bit, and now
software re-use (through object-oriented and other techniques) means
that we can take pieces of code from other people's products and
efficiently and safely adapt them to our requirements.
A concrete, trivial example is the use of bcopy (or memcpy) in
any layer of Unix applications, despite its originally being designed
for the operating system; overloaded assignment in C++ is perhaps a
better way to present it to the programmer. What we do not have is
millions of different copy functions, one for each layer of software.
Basically, the layer/service model is like an extreme version of
Pascal in which you can only declare functions local to their use, so
they can only be used there. Of course, the opposite extreme
of C (where all functions are global) may be too anarchic as well, although
that argument really concerns managing type complexity rather
than the size of the function namespace.