Since we base our model on Abstract Data Types, in the simple case of
a single client/server thread we may be able to analyze the
types of the parameters of a method/operation and identify which involve
updating a server and which do not. From this we can derive which
operations can be concurrent and which cannot. Those which require
locking for reading inherit that operation as a side effect, while those
which must lock out all readers and other writers inherit the appropriate
exclusive operation.
Since read-type operations, and write operations on independent items,
are not interdependent, they cannot affect consistency and can be
eliminated from this analysis syntactically.
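As a rough sketch of this syntactic elimination (the operation kinds and item sets here are hypothetical illustrations, not part of the model), one can classify each operation as a read or an update and record the items it touches:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    name: str
    kind: str          # "read" or "write", derived from parameter analysis
    items: frozenset   # the server items the operation touches

def independent(a: Operation, b: Operation) -> bool:
    """True if the pair cannot affect consistency and may be dropped
    from the concurrency analysis."""
    if not (a.items & b.items):
        return True    # operations on disjoint items never conflict
    return a.kind == "read" and b.kind == "read"
```

Two reads of the same item, or any two operations on disjoint items, would then be excluded from further analysis; only read/write and write/write pairs on shared items remain.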
This approach is adopted in the ANSA Atomic Object Model by adding
certain information to the interface definitions of object methods:
a concurrency control manager decides which atomic operations to
schedule based on "concurrency predicates" that are added to the
object definition by the programmer. These state how operations
that share arguments/objects may be interleaved and ordered. They are
illustrated in the figures below.
There are two types of predicate:
-
Ordering predicates control which operations can be interleaved within
a single tree of dependent atomic actions. If, for example, updating
some database requires several sub-operations and these are all
defined as part of a single superclass, then we would define
ordering predicates on those sub-operations.
-
Separation predicates allow the concurrency control system to
determine which operations from different (independent) trees of
operations may interleave, and which must be separated by use of locks
or other mechanisms to hide partial states of the transaction from
other operation trees.
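One way to picture the two predicate kinds is as boolean tests, one within a tree and one across trees. This is an illustrative Python encoding only; the actual ANSA predicates are written in the interface definition, and the sub-operation names here are invented:

```python
# Ordering predicate: constrains interleaving inside one action tree.
# The sub-operation order ("open" before "credit" before "close") is
# a hypothetical example.
SUB_OP_ORDER = {"open": 0, "credit": 1, "close": 2}

def ordering_allows(earlier: str, later: str) -> bool:
    # the later sub-operation must not precede the earlier one
    return SUB_OP_ORDER[earlier] <= SUB_OP_ORDER[later]

# Separation predicate: decides whether operations from independent
# action trees may interleave, based on the objects they share.
def separation_allows(args_a: set, args_b: set) -> bool:
    # trees sharing no argument objects may interleave freely;
    # otherwise they must be separated by locking or similar means
    return not (args_a & args_b)
```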
A concurrency predicate is in the form:
Figure: Concurrency Predicates
One then forms a Concurrency evaluation matrix for all the operations.
This is illustrated below.
Figure: Concurrency Evaluation Matrix
Each entry shows how the addition of a new operation, opx, relates to
the outstanding operations. It shows the synchronization operator (as
in 1. above) and the set of argument lists associated with the currently
outstanding operations. From these and the modes, we can determine
whether the new operation is allowed to proceed now or must be scheduled
as a pending operation. If we schedule it for later, it is queued until
all the outstanding operations complete and commit or abort.
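A minimal sketch of such matrix-driven scheduling follows; the compatibility table, operation kinds and queueing policy are assumptions for illustration, not the ANSA matrix itself:

```python
# Cell (new, outstanding) says whether the new operation may interleave
# with an outstanding one when their argument lists overlap.
COMPATIBLE = {
    ("read", "read"): True,
    ("read", "write"): False,
    ("write", "read"): False,
    ("write", "write"): False,
}

class Scheduler:
    def __init__(self):
        self.outstanding = []   # (kind, args) of currently running operations
        self.pending = []       # queued until outstanding ops commit/abort

    def submit(self, kind, args):
        for out_kind, out_args in self.outstanding:
            shared = set(args) & set(out_args)
            if shared and not COMPATIBLE[(kind, out_kind)]:
                # conflicts with an outstanding operation: defer it
                self.pending.append((kind, args))
                return "queued"
        self.outstanding.append((kind, args))
        return "running"

    def complete_all(self):
        # all outstanding operations have committed or aborted;
        # resubmit everything that was queued behind them
        self.outstanding = []
        released, self.pending = self.pending, []
        for kind, args in released:
            self.submit(kind, args)
```

For example, a read and a write on the same argument object conflict, so the write is queued; once the outstanding operations complete and commit or abort, the queued write is admitted.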
If operations are nested or related by callback, the analysis is
considerably more complex. If a nested operation only ever appears in
the same "place" in the nesting hierarchy, then we can apply locking at
the top (outer) level; but if two clients may access the same operation,
one directly and the other indirectly, then there is a problem.
The current approach is to examine the dependencies that exist between
transactions that perform operations on a common object. If the transitive
closure of all dependencies (the order in which the transactions operate)
forms a partial order, then the transactions can be serialized. If there
are cycles, the order is ambiguous and the transactions cannot be
serialized.
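The cycle test can be sketched as a depth-first search over the dependency graph; the transaction names below are hypothetical:

```python
def serializable(deps):
    """deps maps each transaction to the set of transactions it
    depends on. Returns True if the dependency relation is acyclic,
    i.e. its transitive closure forms a partial order."""
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / on stack / done
    colour = {t: WHITE for t in deps}

    def has_cycle(t):
        colour[t] = GREY
        for u in deps.get(t, ()):
            if colour.get(u, WHITE) == GREY:
                return True           # back edge: dependency cycle
            if colour.get(u, WHITE) == WHITE and has_cycle(u):
                return True
        colour[t] = BLACK
        return False

    return not any(colour[t] == WHITE and has_cycle(t) for t in deps)
```

A set of transactions whose dependencies form a chain or a diamond passes the test; any mutual dependency (T1 waits on T2 and T2 on T1) fails it, signalling the ambiguous ordering described above.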