Trust in a Secure System

If the point of security in a distributed system is to protect resources against both willful and accidental misuse, then there will need to be some enforcement. If there is any possibility of an entity obtaining resources it is not entitled to, then there needs to be protection within the system. This protection can be provided by a number of different techniques, a few of which are described later in this chapter. However, underpinning all of the protection techniques is a matrix of trust. Various components supporting the security system have to be trusted to carry out a particular function. Which components are trusted, and what they are trusted for, will depend on what the security requirements are and on the design of the actual system.

In a very secure system the number of trusted components is kept very small and they are built very carefully. Unfortunately such systems are expensive and involve a considerable overhead. Many commercial systems compromise by having more trusted components, which is potentially less secure but requires a lower overhead. In the discussion above, on authorization, we assumed that there was a trusted component which would enforce the access control decision made by the authorization function. The system is also trusting the authorization function to make a proper decision in line with the prevailing security policy.

In discussing security for distributed systems it is not possible to say much more about trust without describing a specific system. When designing a distributed system the designer must make a conscious decision about which parts will be trusted, and with what functionality. There needs to be a clear distinction between the trusted parts and the untrusted parts so that the proper mechanisms can be used to keep them apart.
For instance, the boundary between a user process and the operating system on a conventional computer is protected by hardware mechanisms that prevent the user's process from accessing the operating system through any path other than that allowed for by the designer. The designer must install a boundary, decide which parts of the system are within the boundary and need to be trusted, and consequently which parts are not to be trusted and must lie outside the boundary. The boundary will need to be protected by mechanisms, the strength of which will determine much of the strength of the system's security.

Where components cannot, or should not, be trusted with some function, they may still need to use or invoke some security facility. In these cases special techniques can be applied, usually based on cryptography, to support the use of security. For instance, the earlier description of capabilities in access control noted the problem of an object acquiring a capability to which it is not entitled. This can be overcome by having the function which issues capabilities include, in the capability, the identity of the object to which it is issued. Of course, the object trying to use the capability must be authenticated to make sure it is not faking its object identity (that is, assuming the identity of some other object that is entitled to the capability). The capability is then sealed, in some way, so that the receiving object cannot change it. The access control function can then check that the capability is being used along with the correct object identifier; it can also check the capability to see if it has been modified. The objects in the system are not being trusted to look after the capabilities, but they can still hold them and use them. However, using a cryptographic seal means that some cryptographic keys have to be distributed, and that requires some further trusted components. The problem of key distribution is discussed in section 4.7 on cryptography.
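The sealed-capability scheme described above can be sketched in a few lines. The sketch below is illustrative only: it assumes an HMAC-based seal over the capability's contents, and the names (issue_capability, check_capability) and field layout are inventions for this example, not a prescribed design. The key would in practice be distributed to the trusted components as discussed in section 4.7.

```python
# Sketch of a sealed capability: the issuer binds the holder's identity
# into the capability and seals it with an HMAC so the holder cannot
# alter it. Assumes the seal key is shared only by trusted components.
import hashlib
import hmac

SEAL_KEY = b"key-known-only-to-trusted-components"  # illustrative key

def issue_capability(subject_id: str, resource: str, rights: str) -> bytes:
    """Trusted issuer: include the identity of the object to which the
    capability is issued, then seal the whole thing."""
    body = f"{subject_id}|{resource}|{rights}".encode()
    seal = hmac.new(SEAL_KEY, body, hashlib.sha256).hexdigest().encode()
    return body + b"|" + seal

def check_capability(capability: bytes, authenticated_id: str) -> bool:
    """Trusted access-control function: verify the seal (detecting any
    modification) and check that the capability was issued to the
    already-authenticated presenter (detecting a faked identity)."""
    body, _, seal = capability.rpartition(b"|")
    expected = hmac.new(SEAL_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, seal):
        return False                      # capability has been modified
    subject_id = body.split(b"|")[0].decode()
    return subject_id == authenticated_id  # stolen capability fails here

cap = issue_capability("client-42", "/printer", "use")
assert check_capability(cap, "client-42")                        # legitimate holder
assert not check_capability(cap, "client-99")                    # faked identity
assert not check_capability(cap.replace(b"use", b"own"), "client-42")  # tampered rights
```

Note that the objects holding capabilities never see SEAL_KEY; they merely store and present opaque sealed tokens, which is exactly the sense in which they need not be trusted.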
Trust is in fact a rather subtle idea. When you receive a piece of information, there is a great deal of context which causes you to trust it. It is not enough to know that it comes from a trustworthy source via a trustworthy channel; at the very least, you also need to know that the source is not joking.