Threads and Processes - kernel and user space

Threads are the virtual CPUs, and if they aren't properly encapsulated -- poof!, there goes your cross-architectural application support! The response to this statement depends a lot on how much thread migration is allowed. If you have a distributed shared memory system on a homogeneous distributed system and you use thread migration as a load-balancing tool, you have a considerably different set of tradeoffs than if you have client/server distribution or heterogeneity. If some of the independent computers which make up the collection are shared memory multiprocessors, and threads are used to allow concurrent execution within a task running on those computers, the tradeoffs are different than if all of the independent computers are uniprocessors.

In the case of threads which do not migrate, it may be observed that many hardware implementations demand that pre-emptive scheduling occur on the supervisor side of the user/supervisor boundary -- in what we normally refer to as "the kernel" -- so there must be some awareness of threads in the kernel to allow pre-emptive scheduling of competing tasks. Since hardware implementations often make the cost of crossing the user/supervisor boundary much higher than the cost of a procedure call, efficiency dictates that many thread operations be implemented at the user level. However, such implementations lead to the need to schedule threads at the user level, and if threads are visible at the user level, then kernel scheduling and user-level scheduling of threads can compete, causing such undesirable behaviour as a thread blocking its whole task, or a thread being pre-empted while it holds a critical region. The conclusion, then, is that efficiency concerns lead to a desire to place threads in the user address space, but that scheduling requirements demand that the kernel have some knowledge of threads, at least for pre-emption.
The answer is that the kernel and the user-level thread scheduler must cooperate to minimize the interference between the two levels of scheduling. How best to accomplish this is currently being debated. One school of thought holds that kernel participation should be minimized as much as possible, leaving most decisions to the user-level scheduler. The other says: leave the kernel scheduler in place, but have it inform the user-level scheduler of certain events, such as block, unblock and pre-emption. Better still: have a decent process model and avoid this process/thread dichotomy.