The low level storage server (LLSS) is responsible for: the storage and
retrieval of synchronized digital data streams to and from the physical
devices of the system; providing random access to the stored monomedia
objects based on different criteria (at the moment random access is
provided only on video and audio objects and is based on time); the
conversion between the format of the stored objects and the format that
is required by, or more suitable for, the client; and choosing which
communication medium to use for data transmission according to the
client's available resources.

The LLSS uses the notion of a context to determine the physical location
of the objects. In UNIX jargon, a context can be mapped onto a UNIX
directory. Every object has a context id associated with it, which
determines the category the object belongs to. The context can be
determined either by the end application or by the IR database.

The video objects are stored as an H.261 [H.261 reference] compressed
stream of data. The audio objects are stored as a G.722 [G.722
reference] compressed stream of data. With each video or audio stream an
indexing file is generated and stored. The indexing information is used
to provide quick random access to the video and audio objects.

The LLSS can receive/transmit the audio and video objects using one of
two protocols:
<#2434#> RTP Protocol<#2434#>:
This protocol is used when transmitting real-time data over
packet-switched data networks (PSDN). It attaches timestamps and
sequence numbers to each packet [RTP reference].
The LLSS uses the same protocol as the Inria Video System (IVS) [#ivs##1#],
which is a software implementation of an H.261 codec, thus
allowing clients with no access to hardware codecs to use IVS instead.
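To illustrate the per-packet timestamps and sequence numbers, the sketch below packs and parses the fixed 12-byte RTP header in Python. The field layout follows the RTP specification (payload type 31 is the static assignment for H.261); the function names are our own, and a real implementation would also handle CSRC lists and header extensions:

```python
import struct

RTP_VERSION = 2

def pack_rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int) -> bytes:
    """Build a minimal 12-byte RTP fixed header (no CSRCs, no extension)."""
    vpxcc = RTP_VERSION << 6            # version 2; padding, extension, CC all zero
    m_pt = payload_type & 0x7F          # marker bit clear, 7-bit payload type
    return struct.pack("!BBHII", vpxcc, m_pt,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)

def unpack_rtp_header(header: bytes) -> dict:
    """Parse the fixed header fields back out of the first 12 bytes."""
    vpxcc, m_pt, seq, ts, ssrc = struct.unpack("!BBHII", header[:12])
    return {"version": vpxcc >> 6, "payload_type": m_pt & 0x7F,
            "sequence": seq, "timestamp": ts, "ssrc": ssrc}
```

The receiver uses the sequence number to detect loss and reordering, and the timestamp to reconstruct the timing of the media stream.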
<#2436#> H.221 Protocol<#2436#>:
This is a framing protocol for audiovisual data transmission over serial
lines (such as ISDN). Many hardware codecs (including the one we use)
generate a stream of H.221 frames [H.221 reference].
To store the data, the LLSS first strips off the H.221 control data, then it
separates the video data stream from the audio data stream (if both
exist). The audio data is then divided into access units and a timestamp is
allocated for each unit. The error correction framing data is stripped off
the H.261 video data and the raw H.261 data is analyzed and a
timestamp is allocated for each picture.
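The per-picture timestamping of the raw H.261 data can be sketched as follows. H.261 marks the start of each picture with a 20-bit picture start code (0000 0000 0000 0001 0000); the sketch scans the bitstream for this code and allocates a timestamp per picture. The function names and the frame rate are our own assumptions, and the error correction framing is assumed to have been stripped already:

```python
PSC = 0x00010   # H.261 picture start code: 20 bits, 0000 0000 0000 0001 0000

def find_pictures(bitstream: bytes) -> list:
    """Scan an H.261 bitstream bit by bit and return the bit offsets at
    which picture start codes begin, using a 20-bit shift register."""
    offsets = []
    reg = 0
    nbits = 0
    for byte in bitstream:
        for bit in range(7, -1, -1):
            reg = ((reg << 1) | ((byte >> bit) & 1)) & 0xFFFFF
            nbits += 1
            if nbits >= 20 and reg == PSC:
                offsets.append(nbits - 20)   # bit offset where the PSC starts
    return offsets

def timestamp_pictures(bitstream: bytes, fps: float = 25.0) -> list:
    """Allocate a timestamp (in seconds) to every picture found,
    assuming a constant picture rate."""
    return [(off, i / fps) for i, off in enumerate(find_pictures(bitstream))]
```

In practice the timestamps would be derived from the actual capture clock rather than a nominal constant rate, and the offsets would be written to the indexing file alongside the stream.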
During data retrieval the opposite process takes place: the H.261 error
correction frames are reconstructed and multiplexed with the audio data,
and the H.221 control data is added after that.
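Time-based random access relies on the indexing file generated at storage time. A minimal sketch, assuming the index simply holds (timestamp, byte offset) pairs sorted by time; the actual on-disk layout is not specified here and the class name is our own:

```python
import bisect

class TimeIndex:
    """Hypothetical in-memory view of an indexing file: one
    (timestamp in ms, byte offset) pair per access unit or picture,
    sorted by ascending timestamp."""

    def __init__(self, entries):
        self.times = [t for t, _ in entries]
        self.offsets = [o for _, o in entries]

    def seek(self, t_ms: int) -> int:
        """Return the byte offset of the last unit starting at or before
        t_ms, i.e. where playback should resume after a seek."""
        i = bisect.bisect_right(self.times, t_ms) - 1
        return self.offsets[max(i, 0)]
```

Binary search over the index makes a seek logarithmic in the number of stored access units, which is what gives the LLSS quick random access on the video and audio objects.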