Chapter 4 Support for Processes
Objectives
To show the concrete implementation of the process abstraction. To show how processes are scheduled to run on processors. To introduce the special requirements of multiprocessor and real-time systems.
Points to emphasise
- This is a key chapter. Chapters 3, 4 and 5 have discussed OS functions.
Processes provide and invoke these functions - processes make things happen.
- The idea of the process abstraction and its implementation. Focus on a process executing user-level code then consider the management mechanisms that make this possible.
- Pre-emptive and non-pre-emptive scheduling. Note that a non-pre-emptively scheduled process still takes interrupts, but control returns to it afterwards, whatever process the interrupt may have unblocked.
- Although this chapter doesn’t say much about "threads", it is important to introduce the concept. Chapter 4 covered processes sharing part of an address space. Threads share all the resources of a "process", including the whole address space. Note that there is no protection between threads.
Possible difficulties
The basic material on implementing and scheduling processes should not present any difficulties. There are tricky questions, such as "what executes the process management module?", which one should be prepared for but which are rarely asked.
One could spend a great deal of time on sophisticated scheduling algorithms with associated performance analysis, but I feel this is best done in the context of an advanced course on specific types of system. For example, real-time systems have special requirements which are only hinted at here. Future systems which support continuous media will handle scheduling quite differently from current systems. One shouldn’t be dogmatic about the way things are and must be.
Teaching hints
- The design of a process implementation draws on previous material. A process has a memory allocation, needs to be informed of asynchronous events and errors, may need to synchronise with the hardware to perform I/O, may be blocked or runnable, may have open files and so on. The students could deduce all of these things for themselves.
- Consider the interface of a process management module. Discuss the operations that one would wish to invoke on a single process: create, delete, start (put in scheduling queues), stop (remove from scheduling queues), block, unblock, set priority, dispatch, remove from a processor and so on. Discuss any operations that involve all processes, such as schedule.
- A concrete example using the instructions of a machine well-known to the students could be used to show how a process’s state is set up in the hardware and how control is passed to it.
- When threads are introduced, rework the process implementation sections for threads. Discuss what the OS does when switching between threads of the same process and between threads of different processes.
Chapter 4 continued
Support for processes in language systems (Sections 4.12 through 4.16) and the integration of language-system and operating-system support (Section 4.17).
Objectives
To show how concurrency is supported at the language level. To show the possible relationships between language level processes and operating system processes.
Points to emphasise
- Those aspects of the state of a process that are of interest to the OS, and those that are of interest to the language-level support.
- The differences between, and similarities of, co-routines and processes, and when it would be appropriate to use each.
- Even though your language lets you write concurrent processes, they may not be able to run in parallel or respond to asynchronous events.
- In order to understand how a language level concurrent program behaves, you must know the process model of the underlying OS (the OS may not be able to know about more than one process per program) and whether system calls are synchronous (potentially blocking) or asynchronous.
- If the OS supports multi-threaded processes, the processes you write in your program may be supported by the OS and scheduled independently. If scheduling is pre-emptive, a thread may be scheduled as soon as an event occurs. You must be able to assign priorities to your threads to arrange for this to happen.
Possible difficulties
Some students may not have experience of a concurrent programming language or an operating system which supports dynamic process creation.
Students tend to find it difficult to see the differences between co-routines and processes.
Teaching hints
- Useful prerequisite material is PL6.
- A laboratory session showing a language with support for concurrency would be useful at this stage, before IPC and related problems are discussed. The idea of being able to specify independent threads of control and have them executed in parallel should be reinforced.
- An exercise showing dynamic creation of UNIX processes could be used to reinforce the OS's separate-address-space-per-process model. For example, set up three commands with a pipe between each pair and discuss what is happening.
- Emphasise what you can and can’t do with a concurrent language running on an OS with whatever is available locally.