As indicated above, timeliness can be just as important as correct
results.
The usual way that time constraints are worked out starts at the lowest level
(closest to the hardware, and highest priority), and works upwards through
successively lower-priority processes.
If we consider a source of interrupts (say a disk, an asynchronous terminal
line or a network interface):
- at some level, data will be blocked into frames or octets.
- at some level of timing, there will be a maximum bit rate.
- there will be a maximum wait time by which we must start processing these
  blocks (a worked example of this bound follows the list).
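As a concrete illustration of the last point, the maximum wait time falls
directly out of the bit rate and the blocking. The figures in the sketch below
(a 10 Mbit/s line and 1500-octet frames) are made-up examples, not values taken
from this text.

    /* Illustrative only: assumes a 10 Mbit/s line and 1500-octet frames
     * (both hypothetical numbers). */
    #include <stdio.h>

    int main(void)
    {
        double bit_rate  = 10e6;        /* maximum bit rate, bits per second */
        double frame_len = 1500 * 8;    /* frame size in bits                */

        /* With back-to-back frames, a new frame can arrive every
         * frame_len / bit_rate seconds; the interrupt service code must
         * therefore start (and finish) within this bound.                  */
        double max_wait = frame_len / bit_rate;

        printf("worst-case inter-frame time: %.1f microseconds\n",
               max_wait * 1e6);         /* 1200.0 microseconds here */
        return 0;
    }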
When analysing the interrupt- or upcall-driven code, we simply count the lines
of code (or profile the code). Interrupt service code usually de-queues data,
processes it in-line (rarely branching or calling other procedures), then
enqueues a processed block (or several amalgamated blocks) for a higher-level
(lower-priority) process, as sketched below.
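The shape of such a routine might look like the following sketch; rx_ring,
proto_queue, dequeue_frame(), enqueue_frame(), checksum_ok() and drop_frame()
are all hypothetical names rather than any real driver API.

    struct frame;                            /* opaque frame descriptor      */
    struct frame_queue;
    extern struct frame_queue rx_ring;       /* filled by the hardware/DMA   */
    extern struct frame_queue proto_queue;   /* read by the protocol process */
    extern struct frame *dequeue_frame(struct frame_queue *q);
    extern void enqueue_frame(struct frame_queue *q, struct frame *f);
    extern int  checksum_ok(struct frame *f);
    extern void drop_frame(struct frame *f);

    void rx_interrupt_handler(void)
    {
        struct frame *f;

        /* 1. De-queue the raw data the hardware has placed in the ring. */
        while ((f = dequeue_frame(&rx_ring)) != NULL) {

            /* 2. Process it in-line: short, straight-line work only.    */
            if (!checksum_ok(f)) {
                drop_frame(f);
                continue;
            }

            /* 3. Enqueue the processed block for the higher-level,
             *    lower-priority protocol process to consume later.      */
            enqueue_frame(&proto_queue, f);
        }
    }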
By scheduling the most urgent service routines at the highest priority, but
also making them the shortest, we can work out the percentage of CPU time they
consume. Then, from what is left, the worst-case event arrival rate for the
next-priority process can be calculated, and with it the time needed to handle
those events. This continues until the only processes left to run are
non-preemptive user processes, and the CPU "bandwidth" that remains is what
the end user gets.
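This budgeting step can be written out mechanically. The sketch below uses
invented arrival rates and service times purely to show the arithmetic: it
works down the priority levels, subtracting each level's worst-case share from
what the levels above have left over.

    /* A rough sketch of the budget calculation: for each priority level,
     * worst-case CPU share = worst-case event arrival rate * worst-case
     * service time.  The figures in 'levels' are made-up examples.       */
    #include <stdio.h>

    struct level {
        const char *name;
        double arrival_rate;   /* worst-case events per second  */
        double service_time;   /* worst-case seconds per event  */
    };

    int main(void)
    {
        struct level levels[] = {
            { "network interrupt", 8000.0, 10e-6 },   /* highest priority */
            { "disk interrupt",     200.0, 50e-6 },
            { "protocol process",   500.0, 200e-6 },  /* lowest of the three */
        };
        double remaining = 1.0;    /* fraction of CPU still available */

        for (int i = 0; i < 3; i++) {
            double share = levels[i].arrival_rate * levels[i].service_time;
            remaining -= share;
            printf("%-20s uses %5.1f%%, %5.1f%% left below it\n",
                   levels[i].name, share * 100.0, remaining * 100.0);
        }
        printf("user processes get the remaining %.1f%% of CPU bandwidth\n",
               remaining * 100.0);
        return 0;
    }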
A robust system for detecting timing violations, while avoiding spurious
detections, requires the use of synchronised clocks.
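One plausible way to use such clocks (a sketch that assumes a known bound on
clock skew; all names are illustrative) is for the sender to stamp each
message with its deadline, and for the receiver to report a violation only
when the lateness exceeds that skew bound:

    /* A minimal sketch, assuming roughly synchronised clocks with a known
     * skew bound.  Reporting a violation only beyond the skew bound stops a
     * slightly-fast local clock from raising false alarms.                 */
    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_CLOCK_SKEW_US 500   /* assumed bound on clock disagreement */

    struct message {
        uint64_t deadline_us;       /* deadline on the sender's clock */
        /* ... payload ... */
    };

    bool timing_violation(const struct message *m, uint64_t local_now_us)
    {
        return local_now_us > m->deadline_us + MAX_CLOCK_SKEW_US;
    }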
In a distributed system, this whole analysis is much more complex, as events
may be transmitted over a shared network to remote processes. Since the
network is shared between arbitrary machines/processes, the time for messages
to traverse it may be effectively non-deterministic (i.e. no obvious upper
bound).
This is an undesirable situation, so bounds are usually imposed on message
times, both by the network (the time-to-live field in packets) and by the end
systems (end-system to end-system timeouts).
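An end-system timeout of this kind might be implemented as an ordinary bounded
wait on the receiving socket; the sketch below uses poll(2) with an arbitrary
caller-supplied timeout.

    /* A sketch of an end-system timeout: bound the time we will wait for a
     * reply over the shared network, rather than assuming the network itself
     * guarantees delivery within any fixed time.                            */
    #include <poll.h>
    #include <stdio.h>

    /* Wait up to timeout_ms for data on sock; returns 1 if readable,
     * 0 on timeout, -1 on error.  sock is assumed to be a connected socket. */
    int wait_for_reply(int sock, int timeout_ms)
    {
        struct pollfd pfd = { .fd = sock, .events = POLLIN };
        int ready = poll(&pfd, 1, timeout_ms);

        if (ready == 0)
            fprintf(stderr, "no reply within %d ms - declaring a timeout\n",
                    timeout_ms);
        return ready;
    }

If the reply has not arrived when the timeout expires, the end system can
retransmit, report an error or otherwise recover, rather than waiting
indefinitely on the effectively non-deterministic network.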