The feedback on the security policy, from both institutions and individuals,
has been of roughly three kinds. Firstly, the majority of responses have been
strongly supportive (e.g., [26]). A common comment has been that the
work brings clarity to a subject that many had for years found to be confusing,
and that while its principles may not all be achievable at once (or even at
all in some legacy systems), it shows where we should be going. At least one
medical school has discussed incorporating the policy into its curriculum.
The second kind of response has come from officialdom and its sympathisers, who
emphasise `practical' objections to the policy. This became amusingly clear at
a meeting with officials on the 6th February at which a senior official claimed
that the principles would be impractical, as the notification requirements
would be too onerous. We informed him that we would be resolving this question
by conducting a trial at a number of general practices. He then said that
although the principles might work in general practice, they might be
impractical in a hospital setting. A clinician present asked whether he was
suggesting a trial in the context of GP-hospital links and he replied that that
would be an adequate trial. We promptly agreed and minuted the agreement. In a
later letter, he complained that this was still not wide enough to test the
principles' practicality [68].
The rest of the criticisms, the interesting and useful kind, consist of a
large number of observations by various parties, with several recurring
themes.
- A number of clinicians have argued that integrated hospital systems can
bring important safety benefits; they might help prevent the tragedies that can
happen when records go astray (as many paper records do [1]
[2]). The point is also made that at some hospitals, as many as 70%
of admissions are accident and emergency, so there is little scope for
compartmentation between clinical departments [63]. When one asks
advocates of integrated hospital systems how to control the aggregation threat
that arises when many hospital staff can see data on many patients, and which
will become much worse if hospitals are connected together into a network, the
suggestions include:
- forgo NHS networking as insufficiently important;
- allow only a small number of trusted staff to copy records from one
hospital to another, and audit them closely;
- remove general access to records of patients who are not currently
receiving treatment. A typical acute hospital might have files on a million
people, but only a few percent might be active (as in- or out-patients) at any
one time. Only a small number of trusted library and admissions staff would
have the ability to restore a record to `active' status;
- our suggestion was to use a technology such as active badges
[73] to track hospital staff, and to prevent (or investigate) accesses to
the records of patients in other departments or wards. It turns out that a
similar system is used in some US hospitals, though based on departmental
groups of terminals: staff who access another department's records face
questioning and possible disciplinary action [23] (a sketch of such a
context-based check appears after this list);
- educate the public to change their expectations of medical privacy.
In any case, the practicality of securing hospital information systems is an
open question, with some contributors foreseeing serious problems [63]
and others not [35]. Resolving this will be an empirical matter, and
may involve some exceptions to the policy --- an issue which we will discuss
further below.
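To make the `inactive records' and location-tracking suggestions concrete,
here is a minimal sketch of such a context-based access check. It is an
illustration only: the record structure, the badge-derived location lookup
and the list of trusted staff are our own assumptions, not features of any
fielded system.

    from dataclasses import dataclass

    TRUSTED_RESTORERS = {"librarian01", "admissions02"}  # assumed role list

    # Hypothetical badge readings: staff member -> current department.
    # In practice this might come from active badges [73], or from the
    # departmental group of terminals a request arrives on [23].
    BADGE_LOCATION = {"nurse01": "cardiology", "clerk01": "records"}

    @dataclass
    class Record:
        patient_id: str
        department: str   # department currently treating the patient
        active: bool      # False once the episode of care has ended

    def may_read(staff_id: str, rec: Record) -> bool:
        if not rec.active:
            return False  # inactive records are closed to general access
        # Refuse accesses from outside the treating department; a softer
        # variant would allow them but queue them for investigation.
        return BADGE_LOCATION.get(staff_id) == rec.department

    def restore_to_active(staff_id: str, rec: Record) -> None:
        # Only a small set of trusted library and admissions staff may
        # return a record to `active' status.
        if staff_id not in TRUSTED_RESTORERS:
            raise PermissionError("not authorised to reactivate records")
        rec.active = True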
- One of the most trenchant criticisms came from a senior member of the
computer security community, Gus Simmons (who was for many years the senior
scientist at Sandia National Laboratories, whose responsibilities include the
security of the US nuclear arsenal). He argued that it is not adequate to
secure an electronic system to the same level as the paper system it replaces,
as critical social controls are removed.
With a paper records system, an attacker can always grab a file from someone
else's office, but this activity runs counter to social taboos and is fraught
with the risk that the occupant might return unexpectedly. When records are
placed on a computer, however, anyone who can get access through his own
terminal will not appear to a passer-by to be doing anything wrong, and may
thus feel that he is committing at most a very minor misdemeanour. Electronic
record keeping systems should therefore have very strong auditing and
intrusion detection, and this deterrent must be publicised and credible [71].
- As an intrusion detection mechanism, Simmons suggested that whenever
anyone looked at a patient's record but did not bill the patient for her time,
the access should be investigated as prima facie abuse. This would harmonise
the patient's interest in privacy with the hospital management's interest in
maximising its revenue.
- Similar ideas were suggested independently by Ulrich Kohl [43].
His development is somewhat more general, and shows that context-based access
controls can be implemented with quite general parameters. A minimal sketch of
the billing heuristic follows.
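To illustrate how such a billing-based check might work, here is a minimal
sketch; the log formats and names are our own assumptions, not anything
prescribed by Simmons or Kohl.

    # Flag any access to a patient's record that has no matching billing
    # event as prima facie abuse, for the auditors to examine.
    # Both logs are hypothetical lists of (staff_id, patient_id) pairs.
    def unbilled_accesses(access_log, billing_log):
        billed = set(billing_log)
        return [entry for entry in access_log if entry not in billed]

    # Example: dr_b's unbilled access would be queued for investigation.
    accesses = [("dr_a", "pat_1"), ("dr_b", "pat_1")]
    bills = [("dr_a", "pat_1")]
    print(unbilled_accesses(accesses, bills))   # [('dr_b', 'pat_1')]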
- At the other extreme, the policy has been criticised for not emphasising
that computerised medical records can be made much more secure than paper
records [63]. We have never disputed this as a possibility, but we have still
to see a really secure electronic medical record system fielded.
- A number of contributors worried about the extent to which access control
lists would have to be micromanaged, and whether this would turn out to be a
serious burden given the large number of record fragments that can pertain to
one individual [62]. In fact, given the signal-to-noise problems, might
it not turn out to be infeasible?
Our view was that the great majority of individuals can be dealt with using a
default access control list containing a group such as `all GPs working in the
practice', and that only a small number of highly sensitive records would
require exceptional treatment, with an access control list containing only the
treating doctor and the patient himself (a minimal sketch of such a scheme
follows this item). Nonetheless this was felt to be an
extremely important question, and in consequence was one of the points
investigated in a trial of the principles carried out in a number of general
practices. Some early results are described in the paper by Alan Hassey and
Mike Wells [36].
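As an illustration of this default scheme, here is a minimal sketch; the
group name, patient identifiers and helper functions are hypothetical.

    # Most records are covered by a default access control list naming a
    # group; only a few highly sensitive records carry an exceptional,
    # narrower list, so there is little to micromanage in the common case.
    DEFAULT_ACL = {"group:all_gps_in_practice"}

    # Exceptional ACLs: only the treating doctor and the patient appear.
    ACL_OVERRIDES = {"pat_42": {"dr_jones", "pat_42"}}

    def acl_for(patient_id: str) -> set:
        return ACL_OVERRIDES.get(patient_id, DEFAULT_ACL)

    def may_access(principal: str, groups: set, patient_id: str) -> bool:
        acl = acl_for(patient_id)
        return principal in acl or bool(groups & acl)

    # A GP in the practice can read an ordinary record, but not the
    # sensitive one, which is open only to the treating doctor and patient.
    gp = {"group:all_gps_in_practice"}
    assert may_access("dr_smith", gp, "pat_7")
    assert not may_access("dr_smith", gp, "pat_42")
    assert may_access("dr_jones", set(), "pat_42")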
- A number of contributors objected to the restrictions on aggregating
patient data. A typical comment was ``There is no doubt that general aggregated
data, such as immunisation uptake, has been beneficial to the common good
... system linkage or networking has, I would suggest, been poorly planned and
perhaps somewhat hurried ... however I do feel that it is inevitable and that
the benefits will ultimately outweigh the perceived pitfalls'' [60].
Several groups opined that with de-identified data it might be extremely
difficult to obtain information such as analysis of readmissions to hospitals
[22] [70]. This is a very important point, and one on which US
contributors also had much to say; we will deal with it fully in a later
section.
- Tom Rindfleisch made the point, with which we fully agree, that informed
consent should not be sought at the stressful point of critical need, but in
advance, like a living will [62]. A related point is made by the German
information security agency: that for consent to be meaningful, systems must be
designed so that people who refuse to use part or all of them, or to grant some
information access, do not lose their right to care as a result [10].
The German case referred to a health smartcard; it is unclear what would happen
if someone needing hospital treatment in the UK refused permission for their
personal health information to be entered on the Clearing system, and
discussions with officials have elicited only the vague suggestion that perhaps
the hospital would simply foot the bill for treatment itself ``as a one-off''.
- Some members of the computer security community objected to principle 9
(the Trusted Computing Base), on the grounds that it is already part of the
security engineer's basic intellectual equipment. However, the BMA policy
speaks to clinicians as well as to technicians, so we feel its inclusion is
appropriate. No matter how the document is written, there will be parts that
some section of the audience feels to be superfluous.
- Some writers preferred `fuzzier' statements of the security policy goals
and wanted it to be more `patient centred' [61]. We remain unmoved. A
security policy is like a scalpel: it must be clean and sharp rather than warm
and furry. As for the buzzword `patient centred', systems so described often
seem to be a cover for transferring the primary record from the GP to a health
authority, a hospital or an insurance company. We are satisfied to have upheld
the principle of patient control.
- A number of computer companies complained that the security functionality
required was so different from that offered by their current products that
expensive redevelopment would be necessary [11]; while the Association
of the British Pharmaceutical Industry asked for some of the principles to be
made less `draconian' [74].
- We received quite a lot of input on practical solutions used elsewhere,
e.g. German cancer registries [12], the New Zealand registry system
[58], and similar registries implemented in Denmark
[45] and proposed in Norway [13]. The point was also made, from experience
with HIV programmes in the USA, that apart from neonates the date of birth is
clinically irrelevant and should be suppressed in clinical systems, thereby
reducing the likelihood of harmful linkages being constructed with other
systems at some later date [15] (a minimal sketch of such suppression follows
this list).
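As an illustration of the date-of-birth point, here is a minimal sketch; the
one-year cutoff and the output format are our own assumptions.

    from datetime import date

    # Apart from infants, show only the age in years; withholding the
    # exact date of birth makes linkage with other databases harder [15].
    def displayed_age(dob: date, today: date) -> str:
        age_days = (today - dob).days
        if age_days < 365:          # assumed cutoff: exact age matters
            return f"age {age_days} days"
        return f"age {age_days // 365} years"

    print(displayed_age(date(1960, 3, 14), date(1996, 6, 25)))  # age 36 years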
We noted above that some exceptions to the policy may have to be made, e.g. for
accident and emergency staff. This does not of course invalidate the
policy. Even policies such as Bell-LaPadula and Clark-Wilson fail to cover
their application areas completely. In a bank, for example, there are typically
about twenty roles which cannot realistically be subjected to dual control,
such as the chief executive, the chief systems programmer, the computer
security manager and the chief dealer. Such people simply have to be trusted,
despite the fact that the trust occasionally turns out to be misplaced.
This is well understood in the security community. The security policy sets a
yardstick; system builders get as close to it as they economically can; the
shortcomings are examined during the evaluation process; and so when the system
is presented by the contractor to the customer, he can make an informed
decision on whether to accept the residual risk or send the system back for
redevelopment. The policy does not eliminate residual risk, but rather
quantifies it and enables a prudent judgment to be made about it.
That kind of benefit should materialise once people start using the policy to
build systems. Meantime the main benefit is clarity. The policy has enabled us
to work through the logical consequences of the GMC's ethical principle ---
that patients should have control over access --- in much greater detail than
ever before, and apply it as a test to many fielded and proposed systems.
Previously, discussions had tended to set a rather poorly defined `patient
confidentiality' against an equally poorly defined `public interest' that was
often described vaguely in terms of research benefits but was all too often a
front for attempts to increase official power and control. However the policy,
and its follow-up in the GP pilot, led us to identify the tension between
privacy and safety as the best way to express the trade-offs from the patient's
point of view.
Leaflets distributed as part of the GP pilot reflect this. The pilot also
enabled us to identify the flows of information from general practice to
health authorities, for such purposes as item-of-service claims and cervical
screening, as one of the few problems with implementing the policy and
guidelines in general practice [42] [36].
It also brought to our attention that there are potentially major problems with
de-identification of the data in statistical databases. However, before we
explore this, it is appropriate to mention the feedback received from
conferences in the USA.