Chapter 6 Memory Management
Exercises
6-1 Outline the basic memory management functions of an operating system.
To hold information about memory usage so as to be able to respond to client requests for memory. To set up the memory management hardware so that address translation can take place, and to respond to memory management hardware events. To keep track of allocated and free main memory and backing store. To manage the transfers between main memory and backing store.
6-2 Describe how copies of instructions and data are held in the various levels of a memory hierarchy: permanent disk storage, backing store for main memory on disk, main memory, cache memory and processor registers. When are transfers of data made between the various levels? In each case, indicate whether the transfer involved is controlled by hardware or the operating system and explain the interface between the hardware and the operating system.
Backing store for main memory: this may be a separate partition of a disk or a separate disk, sometimes with higher performance than those used for persistent storage (e.g. fixed-head disks). Traditionally, when a new program is to be loaded it is set up in this backing store ready to be paged in. Note that this arrangement is not suitable for memory-mapped files.
Main memory: transfer in on an MMU exception (e.g. page fault). Transfer out if the page has changed and main memory is needed for another page of this or another process (note memory sizes are increasing). Done by OS.
Instruction and data cache: transfer-in takes place when the instruction or data is accessed during instruction execution. Transfer out is to make room for a transfer in. Controlled by hardware.
Processor registers: by hardware during instruction execution.
6-3 What is the virtual address space of a process? How large is the virtual address space for address lengths of 16, 24 and 32 bits? How does a separate address space per process protect processes from each other and the operating system from processes?
The range of memory locations that can be addressed directly.
16 bit address: 64Kbytes
24 bit address: 16Mbytes
32 bit address: 4Gbytes
The virtual address space of a process defines the addresses which are available to a process. If processes have separate address spaces they are unable to address each other's memory for reading or writing. If the operating system is run in its own separate address space, no user-level process can read or write OS memory.
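The sizes quoted above are just 2^n bytes for an n-bit byte address; a quick check (the helper name is illustrative):

```python
# Size of a virtual address space with n-bit byte addresses: 2**n bytes.
def address_space_bytes(bits):
    return 2 ** bits

assert address_space_bytes(16) == 64 * 1024         # 64 Kbytes
assert address_space_bytes(24) == 16 * 1024 ** 2    # 16 Mbytes
assert address_space_bytes(32) == 4 * 1024 ** 3     # 4 Gbytes
```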
6-4 Give examples of why processes might wish to share parts of their address spaces. What kind of memory management hardware would support this sharing? How can memory protection be enforced when sharing is allowed?
Code sharing (e.g. of utilities or libraries) or read-only data sharing is transparent to the processes concerned. It allows the system to economise on the use of physical memory by avoiding multiple copies. Segmentation hardware (or paging hardware with segmentation managed by the OS) is needed with write protection on the shared code or data.
Sharing of writeable data areas allows processes to co-operate efficiently. They may wish to share only a portion of their address spaces. Fine grained segmentation hardware (many segments per process) would support this. If only paging hardware is available the OS may allow the processes to declare shareable regions of their data spaces and set up the shared pages accordingly.
6-5 What are the advantages and disadvantages of running the operating system in the address space of every process? What are the advantages and disadvantages of running the operating system as a separate process with its own address space?
OS and user-level process in same address space: OS may be entered by procedure call with mode switch. No switch of memory management tables is needed on entry or exit. OS may access user memory conveniently, e.g. to transfer data on I/O. Must have memory protection on OS memory. The architecture may support this arrangement.
If the OS is a separate process it is invoked in a uniform way, as for other processes. Data transfer is like IPC (and may use memory mapping). Context switching is needed on call and return. Memory protection is automatic, as between any pair of processes. This matches a message-passing rather than a procedural invocation model.
6-6 Why do systems which use segmented memory management have to solve the problem of fragmentation of main memory? Why do segmentation and swapping go together?
Segments are variable in size. If we use segmentation without paging, memory is allocated in variable-sized contiguous areas, leading to fragmentation. Swapping out and then in again is more efficient for consolidating free store than memory-to-memory transfer. It can also be combined with scheduling policies. All this becomes less important as memories become larger.
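The fragmentation problem can be seen in a toy first-fit allocator for variable-sized segments (all names here are illustrative, not any real allocator's interface): after a few allocate/free cycles the free store is split into non-contiguous holes, and a request fails even though enough free space exists in total.

```python
# Toy first-fit allocator for variable-sized contiguous areas, to
# illustrate external fragmentation. Holes are (start, length) pairs.
class Allocator:
    def __init__(self, size):
        self.holes = [(0, size)]

    def alloc(self, length):
        for i, (start, hlen) in enumerate(self.holes):
            if hlen >= length:
                if hlen == length:
                    del self.holes[i]
                else:
                    self.holes[i] = (start + length, hlen - length)
                return start
        return None                      # no single hole is big enough

    def free(self, start, length):
        self.holes.append((start, length))
        self.holes.sort()
        # (a real allocator would also coalesce adjacent holes here)

    def total_free(self):
        return sum(l for _, l in self.holes)

m = Allocator(100)
a = m.alloc(40); b = m.alloc(20); c = m.alloc(40)
m.free(a, 40)                            # hole of 40 at the front
m.free(c, 40)                            # hole of 40 at the back
assert m.total_free() == 80              # 80 bytes free in total...
assert m.alloc(50) is None               # ...but no contiguous run of 50
```

Swapping everything out and back in again would place the segments contiguously, merging the two holes into one usable area of 80 bytes.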
6-7 Is segmentation transparent to the application programmer and/or the language implementor? What service is segmentation hardware designed to provide?
The application is able to indicate the structure of its virtual address space (via the language system implementation to the OS) so that appropriate protection and sharing may be arranged. If many segments per process are available the application may need to be able to specify fine-grained sharing. Segmentation may be transparent to the application programmer and be managed by the language system implementation (e.g. shareable code, non-shareable data segments). It is not transparent to the language implementor.
Is paging transparent? What problem is paging hardware designed to solve?
Paging is transparent and is for efficient physical storage management.
6-8 How can a large data structure which is in the address space of one process be made shareable by some other process so that they both see the same copy of the data?
By making it a segment of both processes. System calls may be provided to achieve this (e.g. UNIX System V).
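As a minimal sketch, Python's `multiprocessing.shared_memory` module (a POSIX-style facility analogous in spirit to the System V calls mentioned above) lets a region be created by one process and attached by name from another; both attachments see the same physical memory, not a copy:

```python
from multiprocessing import shared_memory

# Create a named shared region; any process knowing the name can attach.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# A second attachment (in real use, made from another process) maps the
# same memory, so the write above is immediately visible through it.
other = shared_memory.SharedMemory(name=shm.name)
seen = bytes(other.buf[:5])

other.close()
shm.close()
shm.unlink()        # remove the region once all sharers are done

assert seen == b"hello"
```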
How can a large data structure be transferred from the address space of one process to that of another without copying the data?
The system may provide facilities to achieve this - see message passing in Chapter 13. If a transfer is required (as opposed to simultaneous sharing) the pages occupied by the data area could be mapped out of one process’s address space and into the other’s.
How can a copy of a large data structure which is in the address space of one process be given to another process? If it is unlikely that both processes will write to the same part of the data structure how could the system avoid giving them a copy each?
Again, the data structure may be sent as a message with an indication that sharing, not transfer, is required. The pages of the data structure may be set to have "copy on write" access rights for both processes, see Section 6.9.2.
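The copy-on-write mechanism can be sketched in miniature (a toy model, not any real MMU's interface; all names are illustrative): both page tables initially point at the same frames, marked copy-on-write, and the first write to a page copies just that frame.

```python
# Toy model of copy-on-write sharing between two processes.
frames = {0: bytearray(b"shared data")}   # frame number -> contents
next_frame = 1

def fork_cow(parent_pt):
    # Child shares every frame; both mappings are marked copy-on-write.
    for entry in parent_pt.values():
        entry["cow"] = True
    return {page: dict(entry) for page, entry in parent_pt.items()}

def write(pt, page, offset, byte):
    global next_frame
    entry = pt[page]
    if entry["cow"]:                      # write fault: copy the frame
        frames[next_frame] = bytearray(frames[entry["frame"]])
        entry["frame"], entry["cow"] = next_frame, False
        next_frame += 1
    frames[entry["frame"]][offset] = byte

parent = {0: {"frame": 0, "cow": False}}
child = fork_cow(parent)
assert parent[0]["frame"] == child[0]["frame"]   # one physical copy
write(child, 0, 0, ord("X"))                     # child writes: copy made
assert parent[0]["frame"] != child[0]["frame"]
assert frames[parent[0]["frame"]][:1] == b"s"    # parent's view unchanged
assert frames[child[0]["frame"]][:1] == b"X"
```

Pages that neither process writes are never copied, which is the saving the question asks about.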
6-9 How many entries would there be in a process page table in a system with 32-bit addresses and a 1K page size?
2^22 (4096K, i.e. over 4 million).
How might the process page tables be organised to avoid a large number of page table entries for a sparsely populated virtual address space?
The language system and OS co-operate in the organisation of soft segmentation. Instead of a flat page table, the OS maintains a number of segment or region page tables.
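One such arrangement can be sketched as a two-level table for the 32-bit address, 1K-page system above: the 22-bit page number is split into two 11-bit indices, and the 2048-entry second-level tables are allocated only for regions actually in use (an illustrative model, not a particular architecture's format).

```python
# Two-level page table: top-level (region) table plus on-demand
# second-level tables, for 32-bit addresses with 1K (2**10 byte) pages.
top = [None] * 2048                    # None = region unmapped

def map_page(vpn, frame):
    hi, lo = vpn >> 11, vpn & 0x7FF
    if top[hi] is None:
        top[hi] = [None] * 2048        # second level allocated on demand
    top[hi][lo] = frame

def translate(vaddr):
    vpn, offset = vaddr >> 10, vaddr & 0x3FF
    hi, lo = vpn >> 11, vpn & 0x7FF
    if top[hi] is None or top[hi][lo] is None:
        raise LookupError("page fault")
    return (top[hi][lo] << 10) | offset

map_page(3, 7)                         # virtual page 3 -> physical frame 7
assert translate(3 * 1024 + 5) == 7 * 1024 + 5
# A sparse address space costs one second-level table, not 2**22 entries:
assert sum(t is not None for t in top) == 1
```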
It has been suggested that a main store page table (sometimes called an inverted page table) might be used, instead of a page table per process, for pages in main store. What are the advantages and disadvantages of this approach?
Physical memory is typically smaller than virtual memory. It is feasible to store a single main store page table (MSPT) in main memory whereas process page tables might need to be swapped out, for example when a process is blocked. The MSPT scales with the size of physical, not virtual, memory. The OS must record both main store and backing store locations of pages of processes by some means. A suggestion is a single main store page table and a backing store page table per process.
The disadvantage is how to organise sharing. How do you indicate that a given page is shared by a number of processes, possibly with different access rights? Note that an associative store is still used for address translation. Each sharer has a separate entry there, and that is where access checking is carried out during instruction execution.
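An illustrative model of the MSPT idea (linear search here stands in for the hashed lookup real implementations use; all names are assumptions):

```python
# Inverted (main store) page table: one entry per physical frame,
# recording which (process, virtual page) currently occupies it.
NUM_FRAMES = 8
mspt = [None] * NUM_FRAMES            # frame -> (pid, vpn) or None

def load_page(pid, vpn):
    frame = mspt.index(None)          # take any free frame
    mspt[frame] = (pid, vpn)
    return frame

def translate(pid, vpn):
    try:
        return mspt.index((pid, vpn))  # linear search; hashed in practice
    except ValueError:
        raise LookupError("page fault") from None

f = load_page(pid=1, vpn=42)
assert translate(1, 42) == f
# The table scales with physical frames, not with virtual address space:
assert len(mspt) == NUM_FRAMES
```

The sharing problem is visible in the model: each frame entry names exactly one (pid, vpn) owner, so a frame shared by several processes cannot be recorded directly in the table itself.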