The most important property to maintain in an n-way conference is
global sequencing.
There are essentially two classes of distribution mechanism for this
kind of application:
- Mesh
Each conferee runs a program which maintains a set of windows on her
screen/display server (all in a box: one input window, one output window
per other conferee, and some control panels/button boxes...), and uses one
(TCP) connection per other conferee to exchange the conference
proceedings (broadcast for data; more complex for the floor-control
protocol). [This is styled like the BSD talk facility.] A sketch of the
broadcast side follows this list.
- Star
A central conference server per conference maintains an (X)
connection to each conferee's display server, with the same appearance
as above.
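The two topologies differ mainly in where the fan-out is done. The
following sketch (in C, with illustrative names such as peer_fds and
broadcast_input; it is not taken from any particular implementation)
shows the mesh side: each conferee broadcasts its own input over one
TCP connection per other conferee.

    /* A minimal sketch of the mesh model, assuming each conferee already
     * holds one connected TCP socket per other conferee.  The names
     * peer_fds, npeers and broadcast_input are illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define MAX_PEERS 16

    static int peer_fds[MAX_PEERS]; /* one connected TCP socket per other conferee */
    static int npeers;              /* how many of peer_fds are in use */

    /* Send one line of local input to every other conferee. */
    static void broadcast_input(const char *line)
    {
        size_t len = strlen(line);
        int i;

        for (i = 0; i < npeers; i++) {
            if (write(peer_fds[i], line, len) != (ssize_t)len)
                perror("write to peer"); /* a real client would have to resynchronise */
        }
    }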
The main problem with the first mechanism is that extra protocol
is required to maintain global sequencing of the input and
output to the conference; otherwise separate interleaved conversations
may appear in different orders on some (or even all)
of the conference displays. A common optimisation is to organise the
conferees in a logical ring and pass a token round for sequence control.
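As a rough sketch of that optimisation (reusing broadcast_input from the
sketch above; the other names are again only illustrative), a conferee
may broadcast only while it holds the token, and then passes the token
to its successor in the ring:

    /* Token-passing sequence control for the mesh, as a sketch.
     * broadcast_input() is the routine from the previous sketch; the other
     * names are assumptions for illustration. */
    static int have_token;   /* nonzero while this conferee holds the token */
    static int ring_prev_fd; /* TCP socket from our predecessor in the ring */
    static int ring_next_fd; /* TCP socket to our successor in the ring */

    /* Block until the predecessor forwards the token marker. */
    static void wait_for_token(void)
    {
        char buf[8];

        while (!have_token)
            if (read(ring_prev_fd, buf, sizeof buf) > 0 && buf[0] == 'T')
                have_token = 1;
    }

    /* A line of input may only be broadcast while we hold the token,
     * so every conferee sees the proceedings in the same order. */
    static void send_line(const char *line)
    {
        if (!have_token)
            wait_for_token();
        broadcast_input(line);
        have_token = 0;
        write(ring_next_fd, "T", 1);  /* pass the token on */
    }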
The second mechanism does not have the same problem, since the central
conference server can act as a global sequencer. Simply by blocking
input from all other users until the previous user's input has
been successfully output on all the displays, we ensure ordering.
However, this mechanism does have two related problems. First, there
is a large load on the central server. Second, the central server is a
single point of failure.
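A minimal sketch of the sequencing loop such a server might run, assuming
it already holds one connection per conferee (all names here are
illustrative): it accepts one line of input at a time and writes it to
every other conferee before reading anything further, so every display
sees the same order.

    /* Star-model global sequencer, as a sketch.  conferee_fds and the
     * helper names are assumptions for illustration. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/select.h>

    #define MAX_CONFEREES 16

    static int conferee_fds[MAX_CONFEREES];
    static int nconferees;

    /* Wait for input from any conferee; return who it came from. */
    static int next_input_line(char *buf, size_t size)
    {
        fd_set rd;
        int i, maxfd = -1;
        ssize_t n;

        FD_ZERO(&rd);
        for (i = 0; i < nconferees; i++) {
            FD_SET(conferee_fds[i], &rd);
            if (conferee_fds[i] > maxfd)
                maxfd = conferee_fds[i];
        }
        if (select(maxfd + 1, &rd, NULL, NULL, NULL) <= 0)
            return -1;
        for (i = 0; i < nconferees; i++)
            if (FD_ISSET(conferee_fds[i], &rd)) {
                n = read(conferee_fds[i], buf, size - 1);
                buf[n > 0 ? n : 0] = '\0';
                return i;
            }
        return -1;
    }

    /* Central loop: one global sequence, at the price of a loaded server
     * that is also a single point of failure. */
    static void serve_conference(void)
    {
        char line[1024];
        int from, i;

        for (;;) {
            from = next_input_line(line, sizeof line);
            if (from < 0)
                continue;
            for (i = 0; i < nconferees; i++)
                if (i != from)
                    write(conferee_fds[i], line, strlen(line));
        }
    }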
Either scheme would benefit from a reliable multicast protocol such as
those described in [BiJo87] and [CrPa88].
In contrast to either of these, a distributed shared memory model for
distributed programs could be used (albeit, in a distributed system,
this must be built on top of some message-passing mechanism, which would
then require all the global sequencing that the mesh approach needs). We
would then just have exchanged the sequencing problem for that of
controlling concurrent access to shared memory.
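To make that trade concrete, here is a rough sketch of the shared-memory
view, with illustrative names only: all conferees append to one shared
transcript, and global sequencing reappears as mutual exclusion on that
transcript. In a genuinely distributed shared memory the lock and the
log would themselves be implemented on top of message passing.

    /* Shared-transcript view of the conference, as a sketch: global
     * sequencing becomes mutual exclusion on one shared log. */
    #include <string.h>
    #include <pthread.h>

    #define LOG_SIZE 65536

    static char   conference_log[LOG_SIZE];
    static size_t log_len;
    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Append one line of conference proceedings to the shared transcript. */
    static void append_line(const char *line)
    {
        size_t len = strlen(line);

        pthread_mutex_lock(&log_lock);   /* concurrency control replaces    */
        if (log_len + len <= LOG_SIZE) { /* explicit global sequencing here */
            memcpy(conference_log + log_len, line, len);
            log_len += len;
        }
        pthread_mutex_unlock(&log_lock);
    }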
Our pilot implementation used the central server model for reasons of
simplicity.