Concurrency Control

When a server is called, there are a number of choices as to how the global state of the server program is maintained:
  1. The server may be "static". It may persist through all the calls from all its different clients. Any global state changed by one call may affect other calls (see the sketch after this list).
  2. It may be "dynamic". It may be created to service each call, and then evaporate after each and every call, usually losing any accumulated state. This approach was taken by the Xerox Courier system, for instance.
  3. It may be "static", but only service calls from a single client. When a client first calls the server, a new instance is created. This instance persists until the client indicates that it has made its last call, or until the client itself is destroyed.
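
To make the first model concrete, the sketch below shows (in C) a handler whose global state is shared by every caller. The sequence-number service and its names are purely illustrative and not taken from any particular RPC system; the point is only that the read-modify-write of seq_no can interleave when two calls run concurrently.

    /* Illustrative handler for a "static" server (model 1): one instance
     * services all clients, so seq_no is global state shared by every call.
     * Nothing here is specific to any real RPC runtime. */
    static long seq_no = 0;                 /* shared global state */

    long allocate_sequence_number(void)
    {
        long n = seq_no;                    /* read  -- another call may run here...   */
        seq_no = n + 1;                     /* write -- ...and both callers get the same n */
        return n;
    }
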
The consequence of choosing the first mechanism rather than the last two is that concurrent access to the server may result in interleaved changes to the global state. This may require special mechanisms, separate from the RPC system, to give the programmer control over this concurrency. One solution is to wrap up all servers as monitors, as sketched below.

Another extreme is that taken by Sun RPC: it constrains the programmer to ensure that all calls are idempotent, meaning that if a call is repeated the server returns the same result. This can only be the case if all calls are cast in a form that identifies all the state they refer to, and if the servers themselves are stateless. This does have one advantage: crashes of the server are of no concern to the programmer (apart from availability/performance reasons). However, it can lead to unnatural interfaces.

The dynamic server approach can be made to perform well if some lightweight process mechanism is provided [e.g. MACH threads]. In this type of system, it is possible to have concurrent server processes without creating/destroying the full context associated with normal user processes.
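
As a rough illustration of the monitor solution, the fragment below serializes access to the same shared state with a mutex; the pthread primitives stand in here for whatever synchronization the actual RPC runtime would supply.

    #include <pthread.h>

    static long seq_no = 0;
    static pthread_mutex_t seq_lock = PTHREAD_MUTEX_INITIALIZER;

    long allocate_sequence_number(void)
    {
        long n;
        pthread_mutex_lock(&seq_lock);      /* enter the "monitor" */
        n = seq_no++;                       /* shared state is updated atomically */
        pthread_mutex_unlock(&seq_lock);    /* leave the "monitor" */
        return n;
    }

The cost of the idempotent style can be seen by comparing interface shapes. The declarations below are hypothetical, but they show how an idempotent call must name all the state it touches (here, a path and an explicit offset) rather than relying on a cursor the server remembers between calls, in the spirit of Sun RPC's stateless servers.

    /* Stateful, non-idempotent: the server remembers the current offset
     * for each handle, so repeating a lost call reads the wrong bytes. */
    int read_next(int handle, void *buf, int nbytes);

    /* Stateless, idempotent: every piece of state the call depends on is
     * carried in the call itself, so a repeated call returns the same result. */
    int read_at(const char *path, long offset, void *buf, int nbytes);
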