Chapter 12 IPC with Shared Memory

Objectives

To show how concurrent programming languages support process interactions based on shared memory.

Points to emphasise

Make the topic a reasoned development rather than a description of programming language syntax. The details are needed as a basis for comparison, but the comparison must be made at the semantic level.

Possible difficulties

Students are confused by the fact that semaphore operations and condition variable operations have different semantics: a process always blocks on WAIT(condition), and SIGNAL(condition) has no effect if the queue of waiting processes is empty, whereas a semaphore SIGNAL is always recorded.
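The semantic difference can be demonstrated concretely. Below is a minimal, self-contained sketch in Go (used here only because it provides both constructs: a buffered channel stands in for a counting semaphore, and sync.Cond for a condition variable). The semaphore remembers a signal delivered before any waiter arrives; the condition variable loses it.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	// A buffered channel acts as a counting semaphore: a SIGNAL (send)
	// is remembered even if no process is waiting yet.
	sem := make(chan struct{}, 1)
	sem <- struct{}{} // SIGNAL before anyone waits...
	<-sem             // ...the later WAIT still succeeds immediately.
	fmt.Println("semaphore: the early signal was remembered")

	// A condition variable forgets a Signal delivered to an empty queue:
	// WAIT always blocks until the *next* Signal.
	var mu sync.Mutex
	cond := sync.NewCond(&mu)
	cond.Signal() // no one is waiting, so this Signal is lost

	done := make(chan struct{})
	go func() {
		mu.Lock()
		cond.Wait() // blocks despite the earlier Signal
		mu.Unlock()
		close(done)
	}()

	// Crude delay so the goroutine reaches Wait before we Signal again;
	// adequate for a classroom demonstration, not for production code.
	time.Sleep(100 * time.Millisecond)
	cond.Signal() // only this Signal wakes the waiter
	<-done
	fmt.Println("condition variable: only the later signal counted")
}
```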

Teaching hints

Pose a series of questions:

How can a compiler make the concurrent programmer’s life easier than just providing semaphores?

A critical region associated with some shared data is an obvious first step. Is this enough? No: we also need to synchronise on the state of the shared data.
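A minimal sketch of this first step (Go as a neutral notation; SharedCounter is an invented example). The lock gives mutual exclusion only; there is as yet no way to wait until the data reaches some desired state, such as "count > 0".

```go
package example

import "sync"

// SharedCounter: every access to the shared data is bracketed by
// lock/unlock, which is what a compiler-supported critical region
// amounts to.
type SharedCounter struct {
	mu    sync.Mutex
	count int
}

func (c *SharedCounter) Increment() {
	c.mu.Lock()         // enter critical region
	defer c.mu.Unlock() // leave critical region
	c.count++
}

func (c *SharedCounter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.count
}
```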

Conditions expressed as arbitrary boolean expressions over shared variables were tried, but are inefficient to implement: every exit from the region may require re-evaluating every waiting process's condition. Instead, let the programmer declare condition variables and use SIGNAL and WAIT operations on them. This is more efficient, but still low level and as hard as semaphores to get correct.
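To show the pattern, and why it is still easy to get wrong, here is a bounded buffer written monitor-style in Go; sync.Cond plays the role of a declared condition variable, and all names are illustrative. The loop around each WAIT is exactly the kind of detail students forget.

```go
package example

import "sync"

type BoundedBuffer struct {
	mu       sync.Mutex // the monitor lock
	notFull  *sync.Cond // declared condition variables
	notEmpty *sync.Cond
	items    []int
	capacity int
}

func NewBoundedBuffer(n int) *BoundedBuffer {
	b := &BoundedBuffer{capacity: n}
	b.notFull = sync.NewCond(&b.mu)
	b.notEmpty = sync.NewCond(&b.mu)
	return b
}

func (b *BoundedBuffer) Put(x int) {
	b.mu.Lock()
	// Must re-test the condition after every wake-up: another producer
	// may have filled the buffer between the Signal and our resumption.
	for len(b.items) == b.capacity {
		b.notFull.Wait() // releases mu while blocked, reacquires on wake
	}
	b.items = append(b.items, x)
	b.notEmpty.Signal() // a no-op if no consumer is queued
	b.mu.Unlock()
}

func (b *BoundedBuffer) Get() int {
	b.mu.Lock()
	for len(b.items) == 0 {
		b.notEmpty.Wait()
	}
	x := b.items[0]
	b.items = b.items[1:]
	b.notFull.Signal()
	b.mu.Unlock()
	return x
}
```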

How about synchronising at the level of operations? How could you implement that for a passive structure like a monitor? Path expressions were tried (as in Path Pascal). They are still difficult to get right, and fully general dynamic, run-time behaviour cannot be expressed.
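As a purely hypothetical illustration of the idea: suppose a path expression such as path put; get end (invented syntax) forces the two operations to alternate strictly. One plausible compilation is into semaphore-like permissions around each operation; in this Go sketch, capacity-one channels stand in for binary semaphores.

```go
package main

import "fmt"

type cell struct {
	putOK chan struct{} // permission to run put
	getOK chan struct{} // permission to run get
	value int
}

func newCell() *cell {
	c := &cell{putOK: make(chan struct{}, 1), getOK: make(chan struct{}, 1)}
	c.putOK <- struct{}{} // initially only put is enabled
	return c
}

func (c *cell) put(x int) {
	<-c.putOK // P(putOK): wait for our turn in the path
	c.value = x
	c.getOK <- struct{}{} // V(getOK): enable the next operation
}

func (c *cell) get() int {
	<-c.getOK
	x := c.value
	c.putOK <- struct{}{}
	return x
}

func main() {
	c := newCell()
	go func() {
		for i := 0; i < 3; i++ {
			c.put(i)
		}
	}()
	for i := 0; i < 3; i++ {
		fmt.Println(c.get()) // prints 0, 1, 2: puts and gets alternate
	}
}
```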

Why not make the "monitor" active and let an internal process decide which invocations to accept? Dijkstra's guarded commands can be used as the basis for this (see Ada's select and accept statements).
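A sketch of this "active monitor" idea in Go, whose select construct is in the spirit of guarded commands: an internal goroutine owns the data and disables an alternative whose guard is false by making its channel nil (receiving on a nil channel blocks forever). All names are illustrative; assume capacity is at least 1.

```go
package example

type buffer struct {
	put chan int      // invocation channel for Put
	get chan chan int // invocation channel for Get, carrying a reply channel
}

func newBuffer(capacity int) *buffer {
	b := &buffer{put: make(chan int), get: make(chan chan int)}
	go func() {
		var items []int
		for {
			// Evaluate the guards: a nil channel disables its alternative.
			putC, getC := b.put, b.get
			if len(items) == capacity {
				putC = nil // guard "not full" is false: refuse Put
			}
			if len(items) == 0 {
				getC = nil // guard "not empty" is false: refuse Get
			}
			select { // accept one enabled invocation
			case x := <-putC:
				items = append(items, x)
			case reply := <-getC:
				reply <- items[0]
				items = items[1:]
			}
		}
	}()
	return b
}

func (b *buffer) Put(x int) { b.put <- x }

func (b *buffer) Get() int {
	reply := make(chan int)
	b.get <- reply
	return <-reply
}
```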

Suppose there are many objects of a given type. How do you allow processes to access different objects concurrently, while ensuring that only one process at a time accesses any given object?
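The usual answer is a lock per object rather than per type; a minimal sketch (Account is an invented example):

```go
package example

import "sync"

// Each Account carries its own lock, so processes may operate on
// different accounts concurrently while access to any one account
// is serialised.
type Account struct {
	mu      sync.Mutex // one lock per object, not one for the whole type
	balance int
}

func (a *Account) Deposit(amount int) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.balance += amount
}
```

Note that an operation spanning two objects (say, a transfer between two accounts) reintroduces the problem: both locks must then be acquired in an agreed order to avoid deadlock.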

Give the students practical experience if possible. It would be good if a number of different concurrency constructs could be tried. See Part V of this guide for suggestions for project work; SR and Pascal FC provide suitable environments.