The main purpose of comsec, or communications security, measures is to ensure that access controls are not circumvented when a record is transmitted from one computer to another. This might happen, for example, if clear data are transmitted to a system which corrupts its access control list, or which does not enforce the principle of informed consent. It might also happen if clear data were intercepted by wiretapping, or if clinical information in an electronic mail message were sent by mistake to a mailing list or newsgroup.
The secondary purpose of comsec mechanisms is to protect the integrity of data in transit through a network. Some messages, such as pathology reports, are life-critical; and there is also controversy over whether clear electronic records are adequate for legal purposes. It is therefore desirable in many applications to add an integrity check to messages.
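One way such an integrity check might be computed is as a keyed hash over the message. The sketch below uses Python's standard hmac module; the shared key and the pathology message are invented for illustration, and this is a sketch of the general technique rather than a recommendation of any particular mechanism.

```python
import hashlib
import hmac

# Invented shared key, assumed to have been agreed out of band.
KEY = b"shared-secret-agreed-out-of-band"

def protect(message: bytes) -> tuple[bytes, str]:
    """Return the message together with an integrity tag."""
    tag = hmac.new(KEY, message, hashlib.sha256).hexdigest()
    return message, tag

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

report, tag = protect(b"Pathology report: potassium 4.1 mmol/l")
assert verify(report, tag)            # unmodified message passes
assert not verify(b"Pathology report: potassium 7.1 mmol/l", tag)
```

A recipient who shares the key can thus detect accidental corruption or deliberate tampering in transit, though a keyed hash alone does not identify the sender to third parties; that requires the digital signatures discussed below.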
Clinicians should not assume that a network can be trusted, unless it is under their direct control and enclosed in protected space, as may be the case for a local area network joining computers in a surgery. Wide area networks such as the Internet and the NHS wide network may not be trusted. Remember that for a network to be trusted is equivalent to saying that it can break system security. To expose patient confidences to a system component which is not under clinical control, or under the effective control of a trustworthy third party, is imprudent to the point of being unethical.
A convenient means of protecting information in a network is provided by cryptography. Modern cryptographic systems allow users to have separate keys for encryption and decryption, and the encryption key can be published while the decryption key is kept secret. Similarly, a user will have separate keys for signature and signature verification; the signature key will be kept secret while the signature verification key is published so that anyone may verify a signed message. A standard textbook on cryptography is Schneier [Sch95].
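The separation of keys described above can be shown with a toy worked example. The numbers below are the small textbook RSA parameters often used for teaching; they are far too small to be secure, and a real system would use an established cryptographic library rather than anything like this.

```python
# A toy illustration of key separation, NOT usable cryptography: the
# primes are absurdly small and there is no padding or hashing.
p, q = 61, 53                        # secret primes
n = p * q                            # public modulus (3233)
e = 17                               # public exponent (encryption / verification)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (decryption / signature)

m = 65                               # a message, encoded as a number < n

# Anyone may encrypt with the published key (n, e)...
c = pow(m, e, n)
# ...but only the holder of d can decrypt.
assert pow(c, d, n) == m

# Signature is the mirror image: only the holder of d can sign...
s = pow(m, d, n)
# ...while anyone may verify with the published key.
assert pow(s, e, n) == m
```

The point of the example is the asymmetry: publishing (n, e) lets anyone encrypt messages to the key holder and verify her signatures, without enabling them to decrypt or to forge.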
Digital signatures allow the creation of trust structures. For example, the General Medical Council might certify all doctors by signing their keys, and other clinical professionals could be similarly certified by their own regulatory bodies. This is the approach favoured by the government of France [AD94]. An alternative would be to build a web of trust from the ground up by users signing each other's keys. A half-way house between these two approaches might involve key certification by a senior clinician in each natural community.
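The idea of a regulatory body certifying a clinician's key by signing it can be sketched as follows. This is a toy model: the RSA parameters are far too small to be secure, the name "Dr Example" is invented, and a real scheme would use standardised certificate formats and algorithms.

```python
import hashlib

def toy_keypair(p, q, e=17):
    """Toy RSA key generation from two given primes."""
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), d                  # (public key, private key)

def digest(data: bytes, n: int) -> int:
    """Hash the data down to a number below the modulus."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(priv, pub, data):
    n, _ = pub
    return pow(digest(data, n), priv, n)

def verify(pub, data, sig):
    n, e = pub
    return pow(sig, e, n) == digest(data, n)

# The regulatory body (here, the GMC) has its own key pair...
gmc_pub, gmc_priv = toy_keypair(61, 53)
# ...and certifies a doctor's public key by signing (name, key) together.
doctor_pub, doctor_priv = toy_keypair(89, 97)
cert = sign(gmc_priv, gmc_pub, f"Dr Example {doctor_pub}".encode())

# Anyone holding the GMC's public key can now check the certificate.
assert verify(gmc_pub, f"Dr Example {doctor_pub}".encode(), cert)
```

A web of trust differs only in who does the signing: instead of one central body, many users sign each other's (name, key) pairs, and a verifier accepts a key when enough trusted paths lead to it.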
All of these options have strengths and weaknesses, and are the subject of current discussion. The centralisers' strongest argument appears to be that even if certification were substantially local, one would still need a central service for cross-domain traffic. They may also argue that this central service should be computerised: a key fingerprint printed next to each clinician's name in the appropriate professional register would not by itself enable clinicians to verify signatures on enclosed objects.
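The fingerprint-in-the-register idea can be sketched briefly; the key bytes and register entry below are invented for illustration.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """A short hash of a key, printable next to a name in a register."""
    return hashlib.sha256(public_key).hexdigest()[:16]

# A printed register would pair each clinician's name with a fingerprint.
register = {"Dr A. Example": fingerprint(b"example-key-bytes")}

# A clinician can check a key received electronically against the printed
# register entry by hand...
received_key = b"example-key-bytes"
assert fingerprint(received_key) == register["Dr A. Example"]
# ...but this manual comparison is not something software can perform
# automatically for every signed object it receives, which is the
# centralisers' point about needing a computerised service.
```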
However, a single certification authority would be a single point of failure, and electronic trust structures should also reflect the actual nature of trust and authority in the application area [Ros95]. In medicine, authority is hierarchical, but tends to be local and collegiate rather than centralised and bureaucratic. If this reality is not respected, then the management and security domains could get out of kilter, and one could end up with a security system which clinicians considered to be a central imposition rather than something trustworthy under professional ownership and control.
Most published key management and certification standards relate to banking, but clinical systems have additional requirements. One might, for example, want a count of the total number of patients' records accessed by the clinician outside her team during a certain period of time, and this might well be enforced through the certification mechanism.
In any case, once each clinician has acquired suitably certified key material, the integrity of access control lists and other information on a network can be enforced by means of a set of rules such as:
Careful consideration must be given to the circumstances in which acts of decryption and signature may be carried out. If the system can execute a signature without the signer's presence, then it may have no force in law [Wri91]. This ties in with the principle that when working cross-domain, records must be given rather than snatched; access requests should never be granted automatically but require a deliberate action by a clinician.
Comsec techniques may be applicable in more restricted applications. The guidelines issued with this document cover prudent practice for dialback protection of links to branch surgeries. Another example might be where a clinician wished to use a portable computer with a mobile telephone to view records while on night visits. Some mobile phones (particularly those using GSM) provide a level of security which may be acceptable, while others are easy to monitor. If an insecure medium is used then it would be prudent to protect the data by application level mechanisms such as encryption.
Encryption and dialback are not the only comsec options. Another is to make data anonymous, in those applications where this is straightforward. For example, a system for delivering laboratory reports to GPs might replace the patient's name with a one-time serial number containing enough redundancy to make accidental error unlikely. The test results might then be transmitted in clear (with suitable integrity checks).
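One way of building such redundancy into a serial number is a check-digit scheme. In the sketch below, the field widths and the mod-97 check (the construction used in IBANs) are choices made for this illustration, not specifics taken from any guideline.

```python
import secrets

def new_serial() -> str:
    """A one-time 12-digit serial: random body plus two check digits."""
    body = f"{secrets.randbelow(10**10):010d}"   # random 10-digit body
    check = 98 - (int(body) * 100) % 97          # two check digits
    return f"{body}{check:02d}"

def plausible(serial: str) -> bool:
    """Reject serials mangled by a typing or transmission error."""
    return serial.isdigit() and len(serial) == 12 and int(serial) % 97 == 1

s = new_serial()
assert plausible(s)
# Any single-digit error changes the residue mod 97, so it is caught:
assert not plausible(s[:-1] + str((int(s[-1]) + 1) % 10))
```

The redundancy means that a mistyped or corrupted serial number is overwhelmingly likely to be rejected rather than silently matched to the wrong patient, while the number itself reveals nothing about the patient's identity.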
The most important factor in getting security that works is not so much the choice of mechanisms but the care taken to ensure that they work well together to control the actual threats.