Finally, we must ensure that the security mechanisms are effective in practice as well as in theory. This leads to issues of evaluation and accreditation.
In computer security terminology, the `trusted computing base' is the set of all hardware, software and procedural components that enforce the security policy. This means that in order to break security, an attacker must subvert one or more of them.
At this point we should clarify what we mean by `trust'. In everyday language, when we say that we trust someone, we mean that we rely on that person to do --- or not to do --- certain things. For example, a patient who shares confidential information with a clinician expects that it will not be passed to third parties without his consent, and relies on this expectation being fulfilled.
A way of looking at such relationships which has been found valuable in system design is to say that a trusted component is one which can break security. Thus a clinician who has obtained confidential information from a patient is now in a position to harm him by revealing it, and he depends on her not to. There will be parts of any computer system on which we depend in the same way: if they are subverted, or contain bugs, then the security policy can be circumvented.
The trusted computing base of a clinical information system may include: computer security mechanisms to enforce user authentication and access control; communications security mechanisms to restrict access to information in transit across a network; statistical security mechanisms to ensure that records used in research and audit do not contain enough residual information for patients to be identified; and availability mechanisms, such as backup procedures, to ensure that records are not destroyed by fire or theft.
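To make the access control component of such a trusted computing base concrete, here is a minimal sketch of a reference monitor: a single small module through which every record access must pass. It is an illustration rather than a prescription; the `AccessRequest' structure, the consent table and all the names in it are hypothetical, and a real system would derive its rules from the access control principles set out earlier in this policy.

    from dataclasses import dataclass

    # Hypothetical illustration: a tiny reference monitor through which
    # every access to a clinical record must pass. Components below this
    # check need not be trusted; only this module is part of the TCB.

    @dataclass(frozen=True)
    class AccessRequest:
        user: str      # authenticated identity of the requester
        patient: str   # whose record is being accessed
        action: str    # "read" or "append"

    # Hypothetical consent table: the clinicians with whom each patient
    # has agreed to share his record.
    CONSENTED_CLINICIANS = {
        "patient-17": {"dr-jones", "nurse-smith"},
    }

    def is_permitted(req: AccessRequest) -> bool:
        """Return True only if the request satisfies the policy: a user
        may read or append to a record only with the patient's consent."""
        allowed = CONSENTED_CLINICIANS.get(req.patient, set())
        return req.user in allowed and req.action in {"read", "append"}

    def access_record(req: AccessRequest) -> str:
        # Fail closed: any request not explicitly permitted is refused
        # and logged, so subversion attempts leave an audit trail.
        if not is_permitted(req):
            print(f"AUDIT: refused {req}")
            raise PermissionError("access denied by security policy")
        print(f"AUDIT: granted {req}")
        return f"record of {req.patient}"  # stand-in for the record store

    if __name__ == "__main__":
        access_record(AccessRequest("dr-jones", "patient-17", "read"))
        try:
            access_record(AccessRequest("dr-evil", "patient-17", "read"))
        except PermissionError as e:
            print(e)

The point of keeping such a monitor small is the one made above: the fewer the components that can break security, the easier it is for an independent evaluator to check them.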
The detailed design of these mechanisms is discussed in the next section. For now, we will remark that it is not sufficient to rely on the assurances of equipment salesmen that their products are `secure' --- these claims must be checked by a competent third party.
Principle 9: Computer systems that handle personal health information shall have a subsystem that enforces the above principles in an effective way. Its effectiveness shall be subject to evaluation by independent experts.
The need for independent evaluation is shown by long experience, and there is now a European scheme, ITSEC [EU91], under which national computer security agencies (in Britain's case CESG/GCHQ) license commercial laboratories to carry out security evaluations. Independent evaluation is also a requirement in other countries such as Australia [Aus95], Canada [TCP93] and the USA [TCS85]. As schemes such as ITSEC are oriented towards military systems, and evaluations under them may be expensive, some industries run their own approval schemes; for example, the security of burglar alarm signalling is evaluated by the underwriters' laboratories of the Loss Prevention Council. Similar industry-wide arrangements may in due course be made for clinical systems.