Vital Signs is an ongoing project in the Rainbow Group of the Cambridge University Computer Laboratory, and has been running since October 1998. It is funded by the Human Factors programme of the Engineering and Physical Sciences Research Council, originally under EPSRC grant GR/M16924 'New paradigms for visual interaction', and now under grant GR/R87482 'Cognitive ergonomics for ubiquitous computing'.
The following text was commissioned for a recent magazine article - it was published in I3 Magazine (March 2001, number 11, pages 10-13). It aims to give a general introduction to my work for the non-specialist reader.
Alan F. Blackwell
Computer Laboratory, Cambridge University
As more and more of the things in our homes contain microprocessors, they increasingly offer abstract functionality. What do we mean by "abstract" in this context? Put simply, abstractions are things that we can't see or touch. But when a computerised appliance contains a user interface, the user is expected to interact with abstractions - by means of what he or she sees and touches in the interface. To this end, the interface includes representations of abstraction, or signs. Users see and manipulate those signs in a systematic way. If they become sufficiently familiar with the conventions, they may even think of the sign as the object, forgetting that they are only manipulating signs - abstractions themselves can never be touchable.
Two research issues arise from this. The first addresses the status of abstract behaviour: how can the user interface of an appliance (let's say a MiniDisc player) include things we can't touch? Briefly, this can occur either because the user is asking the appliance to do something in the future (like recording a radio programme - as it is in the future, we can't interact with it now), which we call an abstraction over time; or alternatively because the user is referring to a number of entities (like a playlist of music to be played at a party) which we call an abstraction over a class of entities. The second research issue addresses a universal type of human artefact: what are the general characteristics of systems of signs we can interact with? We call these notational systems. They include a notation (a visible representation of the abstractions) and an environment for manipulating that notation.
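The two kinds of abstraction can be made concrete with a small sketch in ordinary code. This is purely illustrative - the names and values below are invented, not part of any Vital Signs system:

```python
from datetime import datetime

# Abstraction over time: a future action, represented now as data.
# We cannot "touch" the recording itself, only this description of it.
scheduled_recording = {
    "action": "record",
    "source": "radio",
    "start": datetime(2001, 3, 15, 20, 0),  # a time that has not yet arrived
}

# Abstraction over a class of entities: one sign standing for many things.
party_playlist = ["track_01.mp3", "track_02.mp3", "track_03.mp3"]

# In both cases the user manipulates the signs (the record, the list),
# never the future broadcast or the body of music itself.
print(len(party_playlist))  # → 3
```

The dictionary and the list are notations: visible, manipulable stand-ins for things that are not themselves touchable.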
The characteristics of notational systems are more often found in our offices than at home - at least for the present! Every office worker is expected to interpret, manipulate or create specialised abstract notations related to their work: timetables, flow diagrams, decision trees, work rosters and many more. If we use computers in our work, almost every type of software application includes notations that allow us to create and manipulate complex and powerful abstractions. The spreadsheet notation, for example, is almost as powerful as a general purpose programming language - probably the most challenging abstraction-handling notation that any human ever has to deal with. But it's not just professional programmers or spreadsheet users who face challenges in computer abstraction. Someone working with a word processor might regularly create abstractions over a class (e.g. defining a search and replace operation, or a paragraph style) or abstractions over time (e.g. a repeating keyboard macro).
The Vital Signs project at the Cambridge University Computer Laboratory addresses the needs of people using these kinds of system - not professional programmers (though they too might benefit from usability improvements), but home owners and office workers who must deal with notational systems to help them define abstract functionality in their environment. The starting point of the project has been to address two common misconceptions that occur in usability research: misconceptions about the role of metaphor in direct manipulation, and misconceptions about the needs and abilities of non-technical users.
First, metaphor. Since David Canfield Smith's work in the 1970s, it has been assumed that metaphor in user interfaces (the desktop, for example) is primarily beneficial because it replaces abstraction (which is difficult for users) with depictions of physical objects that behave in a way structurally related to the metaphor. Three years of research in Cambridge has demonstrated that although it is very beneficial for users to be able to directly manipulate signs representing computer abstractions, there is no benefit beyond a simple mnemonic one from signs that resemble physical objects. Most users are able to recognise and process abstractions, and strained physical metaphors can even make their job harder. The Vital Signs logo shows an abstract figure "kicking the bucket" - the bucket is in fact the original wastebasket from the Macintosh desktop, and symbolises the occasionally strained physical metaphors of current-generation interfaces.
The second observation is related to this situation. Professional computer users (especially programmers) are very experienced at creating and manipulating abstractions. Is it asking too much for regular office workers (let alone homeowners) to attempt such challenging intellectual activities? A study carried out as part of the Vital Signs project addressed this question directly. We surveyed the office contents of secretarial workers who were not confident computer users, and compared them to the offices of computer science researchers. We found that the secretarial workers were constantly creating and maintaining abstractions: folders, drawers, cabinets, file boxes and binders. Computer scientists also use such abstractions - on paper, the two groups were closely comparable. But in the computer environment, computer scientists were likely to create at least one computer abstraction (directories, folders, macros and styles) for every paper abstraction in their office, while office workers were strikingly reluctant to invest effort in computer abstractions. It's not that office workers are unable to deal with abstract notations - it's just that computers don't serve their needs as well as paper does.
We have modelled this phenomenon by adapting a well-known model of the psychology of investment decisions (by Kahneman and Tversky). Our system users are not investing money (as in Kahneman and Tversky's work), but they are investing their time and attention in creating abstractions. This is true whether the abstraction is a series of labels in a filing cabinet drawer, a word processor macro, a programmed recording time in a MiniDisc player, or a complex software application. In each case the abstraction developer invests some attention in programming work that could otherwise be spent on the job itself. Furthermore, the abstraction route is risky. It may not work, it may work only partially (a "bug"), or it may turn out to be inappropriate to what is actually needed when the time comes to execute it. This risk of failure is just like the risk of gambling losses in the situations investigated by Kahneman and Tversky. We have created a cognitive simulation of this phenomenon in which a simulated agent makes investment decisions about a simple abstraction - whether or not to invoke a search and replace dialog in a word processor.
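The shape of this investment decision can be sketched numerically. The following is an illustrative sketch only: the payoffs are invented, and the parameter values (0.88, 2.25) are estimates commonly quoted in the prospect theory literature, not figures from the Vital Signs simulation itself:

```python
# A prospect-theory-style value function (after Kahneman & Tversky),
# applied to investing attention rather than money. Illustrative only.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of gaining or losing x minutes of attention."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha  # losses loom larger than gains

def prospect_value(outcomes):
    """Value of a risky option: probability-weighted subjective values."""
    return sum(p * value(x) for p, x in outcomes)

# Option A: edit every occurrence by hand - a certain cost of 10 minutes.
manual = [(1.0, -10)]

# Option B: spend 2 minutes defining a search-and-replace "program" that
# saves 10 minutes if it works (p = 0.8), but costs a further 5 minutes
# of repair if it misfires (p = 0.2).
automate = [(0.8, 10 - 2), (0.2, -5 - 2)]

print(prospect_value(manual))    # certain loss, amplified by loss aversion
print(prospect_value(automate))  # the risky gamble on the abstraction
```

Because losses are weighted more heavily than gains, the subjective appeal of the abstraction can diverge from its objective expected payoff - which is exactly the reluctance observed in the office study.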
The results of these studies are now informing the second phase of the Vital Signs project. The name Vital Signs refers to the recognition that our user interface elements are abstract signs, not metaphors. But it is still essential to retain the advantages of direct manipulation by making those signs "vital". They should respond immediately to user manipulation, and their effects in the abstract task domain should be directly communicated to the user. In the second phase of Vital Signs, we are implementing two experimental notations for generating abstractions: one for the office environment, and the other in the home environment.
Media Cubes concept
Our office project is entitled See What You Need (SWYN). It is intended to supplement word processors with the powerful kinds of abstraction provided in (for example) Microsoft Word's Visual Basic. Rather than taking Visual Basic as a functional objective, we consider the Perl language, which is often used by computer professionals for small-scale or casual programming tasks. Perl is even less suited than Visual Basic for use by non-programmers, so we are completely replacing the notation with one that shows users only the abstractions that they need. These are expressed in transparent overlays superimposed on the word processor window so that abstract structures can be seen and manipulated at the same time as the data that they will affect. These abstract notations have been experimentally verified in usability trials with non-programmers before implementation started. We believe that this is an essential precaution in this kind of research, but one that is unfortunately all too rare. Within the context of Vital Signs, SWYN demonstrates the combined advantages of reducing the cost of abstraction investment (through programming by example techniques) and simultaneously reducing the risk of abstraction failure (through direct simulation of effects in the work domain).
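The underlying abstraction that SWYN presents is the regular expression. As a purely textual illustration of the kind of abstraction over a class a SWYN user defines (the pattern and strings below are invented, and SWYN itself replaces this textual syntax with a graphical notation):

```python
import re

# One "program" that matches a whole class of strings, not one literal:
# rewrite any date of the form 15/03/2001 as 2001-03-15.
pattern = re.compile(r"(\d{2})/(\d{2})/(\d{4})")

text = "Meetings on 15/03/2001 and 28/06/2001."
result = pattern.sub(r"\3-\2-\1", text)  # reorder the captured groups

print(result)  # → Meetings on 2001-03-15 and 2001-06-28.
```

A single abstract definition transforms every member of the class at once - the power, and the risk, that the attention investment model describes.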
Our home environment project takes place in the context of the Computer Laboratory AutoHAN programme for automated home area networking. We developed a hardware implementation of the ATM networking protocol that was installed in a test house several years ago. Since then we have been working on distributed computing architectures and user interface technology for interacting with networked devices in the home context. Despite a great deal of investment in home networking and control technologies around the world, usability has remained a relatively unsuccessful aspect of that research. Our approach is to adapt the familiar remote control, both simplifying its interface and extending its capabilities. We have created a range of remote controls, each of which has only a single button and a single conceptual function. These "Media Cubes" can, however, be dynamically associated with any appliance in the house. If a cube with the "play/pause" function is associated with a CD player, pressing the button will play and pause the CD player. Moreover, the cube can be associated with any other appliance in order to provide the same remote control function there.
Media Cubes operational prototype
So far these are direct manipulation functions: cubes can be directly associated with appliances by placing them against an appliance. The associated appliance behaviour can be defined directly using the appliance's own controls. But the abstraction power of the cubes lies in the fact that they communicate with each other (through induction loops in their faces). More complex combinations of behaviour can be defined by placing sets of cubes together and recording their configurations. The physical cubes then become a notational system of their own - not a pencil and paper notation, or even a notation on a direct manipulation GUI screen, but tangible signs that can be touched, grasped and moved in literal direct manipulation of abstract definitions.
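One way to picture the cube abstraction in conventional code is the sketch below. It is purely hypothetical - all names are invented, and it ignores the induction-loop hardware and the AutoHAN infrastructure - but it illustrates the notational idea of a single-function, dynamically re-bindable control:

```python
class Cube:
    """A hypothetical model of one Media Cube: one function, one button."""

    def __init__(self, function):
        self.function = function   # the cube's single conceptual function
        self.appliance = None      # dynamically (re)associated at any time

    def associate(self, appliance):
        """Placing the cube against an appliance binds it to that appliance."""
        self.appliance = appliance

    def press(self):
        """The single button applies the cube's function to its appliance."""
        if self.appliance is None:
            return "no appliance associated"
        return f"{self.function} -> {self.appliance}"


play_pause = Cube("play/pause")
play_pause.associate("CD player")
print(play_pause.press())        # → play/pause -> CD player

# The same cube can later control a different appliance.
play_pause.associate("MiniDisc player")
print(play_pause.press())        # → play/pause -> MiniDisc player
```

What the physical cubes add beyond this sketch is composition: because cubes communicate with each other, placing them together composes their functions into larger abstract definitions, with the configuration itself as the notation.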
The Vital Signs project is funded by the Engineering and Physical Sciences Research Council under EPSRC grant GR/M16924 "New paradigms for visual interaction." The goal of the project has been to radically re-think the nature of abstract interaction, providing abstract facilities that directly address the needs of users. This is not restricted to the rather limited notion of "End-User Programming", which appears to emphasise the corporate objective of enabling unqualified programmers to undertake software development projects. Instead, Vital Signs concentrates on the abstract characteristics of typical domestic and office tasks. Many of these are informally described as "programming" - we say that we program our VCRs, and the timers on our boiler controls. But neither professional programmers nor computer science researchers are in the habit of accepting such activities as resembling their own far more complex occupations. Vital Signs addresses these most basic types of programming, demonstrating that it is possible to work from a coherent cognitive perspective and theoretical critique to the development of practical design tools and prototypes. Further work continues to take this agenda seriously, as we undertake studies of central heating programmers and mobile communications devices. More information about the Vital Signs project, and related research by the author, can be found at the URLs quoted.
Blackwell, A.F. (2001). SWYN: A visual representation for regular expressions. In H. Lieberman (Ed.), Your wish is my command: Giving users the power to instruct their software. Morgan Kaufmann, pp. 245-270.
Blackwell, A.F. & Green, T.R.G. (1999). Investment of attention as an analytic approach to Cognitive Dimensions. In T. Green, R. Abdullah & P. Brna (Eds.) Collected Papers of the 11th Annual Workshop of the Psychology of Programming Interest Group (PPIG-11), pp. 24-35.
Blackwell, A.F. & Green, T.R.G. (1999). Does metaphor increase visual language usability? In Proceedings 1999 IEEE Symposium on Visual Languages VL'99, pp. 246-253.
Blackwell, A.F. and Hague, R. (in press). Designing a Programming Language for Home Automation. To appear in Proceedings of the 13th Annual Workshop of the Psychology of Programming Interest Group (PPIG 2001).
Kahneman, D. & Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica 47(2), 263-291.
This project addresses two questions that have been inhibiting the development of novel graphical user interfaces for more than a decade. First, the main thrust of formal HCI research has been toward evaluative rather than generative theories, and hence offers little advice for the discovery of new interaction paradigms. Second, the design of new graphical paradigms starts from the assumption that the application of metaphor is central to usability. This is seldom questioned, but recent research suggests that it may be misleading. A new approach to both of these questions is offered by recent research in thinking with diagrams (Blackwell, ed., 1997) and reasoning with diagrammatic representations (Glasgow et al., 1995).
To some extent, the paucity of novel interaction paradigms proposed in recent years results from a period of commercial consolidation. Much research now assumes the paradigm initiated in the Xerox Star and exemplified by the Apple Macintosh. It interprets the ideal of direct manipulation (Shneiderman 1983) in terms of desktop icons, scrollable windows, pull-down menus, and command buttons. A disadvantage of this consolidation, however, has been the extent to which it restricts both access and innovation. This style of interface is scarcely usable by populations such as the visually impaired (Strothotte & Strothotte 1997) or those with motion control deficits (Keates et al. 1997). It also provides little opportunity for the application of innovative user interface devices such as the Cambridge autostereo display (Dodgson & Lang 1993) and video user interfaces (Stafford-Fraser & Robinson 1996) previously developed in the Rainbow Group.
Usability evaluation methods have also assumed the precedence of this interface paradigm, and hence give little guidance regarding support for special user populations or novel devices. The ISO standard ISO 9241, for example, devotes whole parts to form-filling and menu dialogues. Methods such as GOMS (Kieras 1994), heuristic evaluation (Nielsen 1993), cognitive walkthrough (Wharton et al. 1994) and cognitive dimensions (Green & Petre 1996) allow for generalisation to novel kinds of interface, but only cognitive dimensions provides specific advice for the designer trying to find new positions in the potential design space.
The major factor inhibiting the development of new interaction paradigms, however, is the fact that computer scientists often rely on untested folk psychology when designing graphical notations (Blackwell 1996b). The intuition that graphical interfaces rely on metaphor is perhaps the most pervasive of these borrowings from folk psychology (Smith 1977; Blackwell 1996b, page 243). It is true that the use of metaphor has been shown to have benefits in an instructional setting (Spiro et al. 1989), and even in a computational setting (Mayer 1975), but in those studies the metaphors were presented verbally. The notion of a graphical metaphor is a recent one. Insofar as it assumes that graphical metaphor assists learners to understand abstraction, it may even be a wrong notion (Pimm 1995).
In a recently completed project, Blackwell (1998) carried out an extensive evaluation of the benefits of using visual metaphor as an instructional device in the use of graphical representations. In a series of controlled experiments, he found no evidence of benefit from visual metaphor in learning to use a new notation, memory for symbols, facilitation of problem solving, or productivity in descriptive tasks. More worrying still, he found evidence that visual metaphors can inhibit users from addressing abstract tasks. These findings are in direct contrast to the assumptions of the HCI community, where it is more often claimed that graphical metaphors should assist users with handling abstraction - the reasons for this are discussed by Simos & Blackwell (1998). Moreover, such claims are seldom supported by experimental evidence. When they are, the experiments often compare text to graphics, without testing the assumption that any advantage of the graphical representation must stem from metaphor (e.g. Lin et al. 1997, p. 273).
It is important at this stage to clarify a point that often causes unnecessary scepticism about these observations. We have no doubt that graphical user interfaces have brought huge benefits, during a period when computers are being used by a more diverse population than ever before. What we are questioning is not the benefit of graphical user interfaces, but the attribution of those benefits to the use of metaphor. It is our contention that it may be unwise to continue with large projects predicated on the advantages of metaphor (e.g. Smyth, Anderson & Alty 1995) without further evidence of the specific cognitive effects of using metaphor in graphical representations.
There are some promising points of departure for a new approach. They include large-scale questionnaire studies of experienced users of supposedly metaphorical notations (Whitley and Blackwell 1997), detailed interviews with individual users regarding their own intuitions of natural representations (Petre and Blackwell 1997) and unifying reviews of the literature in cognitive psychology that has investigated specific cognitive properties of graphical representations (Blackwell 1997).
These and other experimental investigations into the perception of diagrammatic representations suggest that even basic cognitive properties of the icon / window / menu paradigm may be misunderstood by researchers and practitioners. It is commonplace to find popular misunderstanding of menus (e.g. "menus as interaction objects follow the 'restaurant' metaphor" - Akoumianakis & Stephanidis 1997) or of pushbuttons (commonly understood in terms of the Gibsonian perceptual "affordances" of the design literature (Norman 1988), rather than the more appropriate "affectances" of Ittelson (1996)). The project proposed in the following section aims to address this theoretical deficiency with a new framework for graphical interface design based on cognitive research into diagram use.
This proposal is grounded in three specific user contexts, each of which demonstrates some aspect of the deficiencies arising from the metaphor assumption in user interface design. These user contexts also provide the basis for defining the potential beneficiaries of this research later in the proposal.
For further information about the Vital Signs project, please contact
Telephone: +44 1223 334418
Facsimile: +44 1223 334678
New Museums Site
Cambridge CB2 3QG
Akoumianakis, D. & Stephanidis, C. (1997). Supporting user-adapted interface design: The USE-IT system. Interacting with Computers, 9(1), 73-104.
Akscyn, R.M., McCracken, D.L. & Yoder, E.A. (1988). KMS: A distributed hypermedia system for managing knowledge in organisations. Communications of the ACM, 31(7), 820-835.
Blackwell, A.F. (1998). Metaphor in Diagrams. Thesis submitted for PhD degree, Cambridge University.
Blackwell, A.F. (1996a). Metaphor or Analogy: How Should We See Programming Abstractions? In P. Vanneste, K. Bertels, B. De Decker & J.-M. Jaques (Eds.), Proceedings of the 8th Annual Workshop of the Psychology of Programming Interest Group, pp. 105-113.
Blackwell, A.F. (1996b). Metacognitive Theories of Visual Programming: What do we think we are doing? In Proceedings IEEE Symposium on Visual Languages, pp. 240-246.
Blackwell, A.F. (1996d). Chasing the Intuition of an Industry: Can Pictures Really Help Us Think? In M. Ireland (Ed.), Proceedings of the first Psychology of Programming Interest Group Postgraduate Student Workshop, pp. 13-24.
Blackwell, A.F. (1997a). Diagrams about Thoughts about Thoughts about Diagrams. In M. Anderson, (Ed.) Reasoning with Diagrammatic Representations II: Papers from the AAAI 1997 Fall Symposium. Technical Report FS-97-02. Menlo Park, California: AAAI Press, pp. 77-84.
Blackwell, A.F., Ed. (1997). Thinking with Diagrams Discussion Papers. First UK Workshop on Thinking with Diagrams, Portsmouth UK, January 1997.
Chandrasekaran, B., Glasgow, J. & Narayanan, H. (Eds.) (1995). Diagrammatic Reasoning: Computational and Cognitive Perspectives on Problem Solving with Diagrams. MIT Press.
Chao, C.M. & Maeda, J. (1997). Concrete programming paradigm for kinetic typography. Proc. IEEE Workshop on Visual Languages, pp. 446-447.
Choi, S.S. & Marriott, K. (1995). Automatic construction of user interfaces from constraint multiset grammars. Technical Report 95/229, Department of Computer Science, Monash University. Victoria, Australia.
Cypher, A., Ed., (1993). Watch what I do: Programming by demonstration. MIT Press.
Dodgson N. A. & Lang, S. R. (1993). Visualisation on the Cambridge Autostereo Display (3DTV). Presented at 3D Visualisation in Engineering Research, Rutherford-Appleton Laboratory, Cambridge.
Freeman, E. & Gelernter, D. (1996). Lifestreams: A Storage Model for Personal Data. ACM SIGMOD Bulletin, March 1996.
Gentner, D. & Wolff, P. (1997). Alignment in the processing of metaphor. Journal of Memory and Language, 37(3), 331-355.
Green T.R.G. & Petre M. (1996). Usability analysis of visual programming environments: a 'cognitive dimensions' approach. Journal of Visual Languages and Computing 7, 131-174.
Green, T.R.G. and Blackwell, A.F. (1996b). Ironies of Abstraction. In Proceedings 3rd International Conference on Thinking. British Psychological Society.
Henderson, D.A. Jr. & Card, S.K. (1986). Rooms: The use of multiple workspaces to reduce space contention in a window-based graphical user interface. ACM Transactions on Graphics, 5(3), 211-241.
ISO 9241 (1996). Ergonomic Requirements for Office Work with Visual Display Terminals.
Ittelson, W.H. (1996). Visual perception of markings. Psychonomic Bulletin & Review 3:171-187.
Keates, S., Potter, R., Perricos, C. & Robinson, P. (1997). Gesture recognition - research and clinical perspectives. In Proceedings of RESNA '97, pp. 333-335.
Keates, S. & Robinson, P. (1997). User performance modelling and cognitive load. In Proceedings of RESNA '97, pp. 342-344.
Kieras, D.E. (1994). A guide to GOMS task analysis. University of Michigan.
Kurlander, D. & Feiner, S. (1993). A history of editable graphical histories. In A. Cypher (Ed.), Watch what I do: Programming by demonstration. MIT Press.
Lewis, C. & Olson, G.M. (1987). Can Principles of Cognition Lower the Barriers to Programming? Empirical Studies of Programmers: Second Workshop. Ablex.
Lin, H.X., Choong, Y.-Y. & Salvendy, G. (1997). A proposed index of usability: A method for comparing the relative usability of different software systems. Behaviour and Information Technology, 16, 267-278.
Mayer, R.E. (1975). Different problem-solving competencies established in learning computer programming with and without meaningful models. Journal of Educational Psychology, 67(6), 725-734.
Nielsen, J. (1993). Usability Engineering. Boston MA: Academic Press.
Norman, D.A. (1988). The design of everyday things. New York: Doubleday.
Pennington, N. (1987). Comprehension strategies in programming. In G.M. Olson, S. Sheppard & E. Soloway (Eds.), Empirical Studies of Programmers: Second Workshop. Norwood, NJ: Ablex.
Petre, M. and Blackwell, A.F. (1997). A glimpse of expert programmer's mental imagery. In S. Wiedenbeck & J. Scholtz (Eds.), Proceedings of the 7th Workshop on Empirical Studies of Programmers, pp. 109-123.
Pimm, D. (1995). Symbols and meanings in school mathematics. London: Routledge.
Robinson, P., Sheppard, D., Watts, R., Harding, R. & Lay, S. (1997). A Framework for Interacting with Paper. In D Fellner and L Szirmay-Kalos (Eds) Proceedings of Eurographics '97.
Shneiderman, B. (1983). Direct manipulation: A step beyond programming languages. IEEE Computer, 16(8), 57-69.
Shu, N.C. (1988). Visual Programming. New York: Van Nostrand Reinhold.
Simos, M. & Blackwell, A.F. (1998). Pruning the tree of trees: The evaluation of notations for domain modeling. In J. Domingue & P. Mulholland (Eds.), Proceedings of the 10th Annual Meeting of the Psychology of Programming Interest Group, pp. 92-99.
Smith, D.C. (1977). Pygmalion: A computer program to model and simulate creative thought. Boston, MA: Birkhäuser.
Smyth, M., Anderson, B. & Alty, J.L. (1995). Metaphor reflections and a tool for thought. In M.A.R. Kirby, A.J.Dix & J.E.Finlay (Eds.), People and Computers X (Proc HCI '95). Cambridge University Press.
Spiro, R.J., Feltovich, P.J., Coulson, R.L. & Anderson, D.K. (1989). Multiple analogies for complex concepts: antidotes for analogy-induced misconception in advanced knowledge acquisition. In S. Vosniadou and A. Ortony (Eds.), Similarity and Analogical Reasoning. Cambridge University Press.
Stenning, K. & Oberlander, J. (1995). A cognitive theory of graphical and linguistic reasoning: logic and implementation. Cognitive Science, 19, 97-140.
Strothotte, C. & Strothotte, T. (1997). Seeing between the pixels: Pictures in interactive systems. Springer Verlag.
Suwa, M., Gero, J. & Purcell, T. (forthcoming). Analysis of cognitive processes of a designer as the foundation for support tools. Submitted to Artificial Intelligence in Design '98.
Watt, S. (1998). Syntonicity and the psychology of programming. In J. Domingue & P. Mulholland (Eds.), Proceedings of the Tenth Annual Meeting of the Psychology of Programming Interest Group, pp. 75-86.
Wharton, C., Rieman, J., Lewis, C. & Polson, P. (1994). The cognitive walkthrough method: A practitioner's guide. In J. Nielsen & R.L. Mack (Eds.), Usability Inspection Methods, pp. 105-140. NY: Wiley.
Whitley, K.N. and Blackwell, A.F. (1997). Visual programming: the outlook from academia and industry. In S. Wiedenbeck & J. Scholtz (Eds.), Proceedings of the 7th Workshop on Empirical Studies of Programmers, pp. 180-208.