The ease with which a robot representation can be interpreted by humans is particularly important when the robots are dependent on humans to do some element of the reasoning involved in a task. This is not always considered in discussions of artificial intelligence for robots, especially where hypothetical autonomous robots (often anthropomorphic, and named ``Robbie'') solve problems that are normally only encountered by humans.
The majority of today's robots do not, however, ``reason'' about their workspace in any sense that we would recognise - they simply follow a prescribed sequence of motions (in some cases they may also react to unexpected events). A more realistic goal than the autonomous ``Robbie'' is task-level programming, which is in effect a proposed division of reasoning labour between a human and a robot; but even task-level programming requires a wide range of reasoning abilities. These include acquiring a description of the task from the programmer, interpreting sensory information about the workspace, planning actions, and using knowledge gained from previous tasks.
Spatial representations for robots developed to date have generally been specifically aimed at one of these functions. Programming representations have been developed from computer aided design methods or from general purpose computer languages, representations of sensory data have been designed specifically for particular sensors or object matching tasks, and motion planning systems have used ad hoc representations in order to apply particular algorithms for geometric collision avoidance or pathfinding.
The PDO/EPB representation may be well suited for use in task-level programming systems, for three reasons. Firstly, it describes things in a way that seems natural to a programmer - the ``move towards'' or ``move forward and to the left'' operations are the way that we naturally describe qualitative motion. Likewise, the description of shape seems natural to people - consider the following extended polygon boundary description of a light bulb: ``Most of this shape is a circular curve, turning through about three-quarters of a circle. Each end of the curve extends into a wiggly section; the wiggles are parallel to each other. The last side is a complicated convex shape. It curves inward on each side, and curves outward in the middle''. This description is easy for a human to construct, and is sufficient for qualitative spatial reasoning.
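The light-bulb description above could be transcribed as a sequence of qualitative side descriptors. The following is a minimal sketch of such an encoding; the class, field names, and vocabulary here are illustrative assumptions, not the actual notation of the EPB representation.

```python
from dataclasses import dataclass

# Hypothetical encoding of an extended polygon boundary (EPB) side.
# The fields are assumptions for illustration only.
@dataclass
class Side:
    kind: str       # "curve", "wiggly", or "complex"
    curvature: str  # "convex", "concave", or "mixed"
    extent: str     # rough qualitative size, e.g. "three-quarter-circle"

# The light-bulb description from the text, side by side.
light_bulb = [
    Side("curve", "convex", "three-quarter-circle"),  # the glass envelope
    Side("wiggly", "mixed", "short"),                 # one threaded section
    Side("wiggly", "mixed", "short"),                 # the parallel threaded section
    Side("complex", "mixed", "short"),                # base: concave sides, convex middle
]

def is_mostly_circular(shape):
    """A qualitative query: is the dominant side a large circular curve?"""
    first = shape[0]
    return first.kind == "curve" and first.extent == "three-quarter-circle"
```

Note that queries like `is_mostly_circular` need only the qualitative labels; no coordinates or exact curvatures are required, which is the point of the representation.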
Secondly, the PDO/EPB representation can be used to reason about motion down to the level of individual robot movements. This is the range over which a unified representation is needed by systems such as Lozano-Perez's LAMA system. The robot movements are controlled according to ``motion strategies'' (INSERT, for example, which defines a strategy for inserting a cylinder into a round hole). The high level reasoning system need not have any information regarding the low level strategies, other than what effect they have. This is analogous to human motion - we plan our actions qualitatively, but individual motions use local feedback information, independent of that high level reasoning.
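The separation between qualitative planning and low-level motion strategies can be sketched as follows. INSERT is the example strategy from the text; everything else (the effect labels, the state dictionary, the planner loop) is a hypothetical illustration of the interface, not the LAMA design.

```python
def insert_strategy(state):
    """Low-level INSERT strategy: local feedback control would live here.

    The planner never inspects this body; it knows only the declared effect.
    """
    return dict(state, peg_in_hole=True)

# The planner's knowledge of each strategy is limited to its declared effect.
# (Names and effect labels are hypothetical.)
STRATEGIES = {
    "INSERT": {"effect": "peg_in_hole", "run": insert_strategy},
}

def plan(goal, state):
    """Select strategies qualitatively, by declared effect alone."""
    for name, strategy in STRATEGIES.items():
        if strategy["effect"] == goal and not state.get(goal):
            state = strategy["run"](state)
    return state
```

The design point is that `plan` depends only on the `effect` field: a strategy's internal feedback loop can change without touching the high-level reasoner, just as a person can adjust a grip without replanning the task.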
Thirdly, it would be possible to construct a PDO/EPB description directly from sensory data, so that a robot could update its internal representation of the world during performance of a task. The EPB shape representation is very similar to representations output by vision systems such as Mackerras' [Mac87b], and the proximity ordering transform can be carried out directly on a stored image after edge filtering and polygon segment identification.
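As a rough illustration of what a proximity ordering over identified polygon segments might compute, the sketch below ranks segments by distance from a reference point. This is an assumption about the flavour of the transform, made for illustration; it is not the actual PDO algorithm.

```python
import math

def midpoint(seg):
    """Midpoint of a 2-D line segment given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def proximity_ordering(segments, reference):
    """Return segment indices ordered from nearest to farthest.

    A hypothetical stand-in for a proximity ordering over the polygon
    segments identified after edge filtering of a stored image.
    """
    def dist(i):
        mx, my = midpoint(segments[i])
        return math.hypot(mx - reference[0], my - reference[1])
    return sorted(range(len(segments)), key=dist)

# A unit square's four sides, viewed from a point to its right:
square = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
          ((1, 1), (0, 1)), ((0, 1), (0, 0))]
order = proximity_ordering(square, (3, 0.5))  # right side nearest, left side farthest
```

Because the ordering is purely relative, it could in principle be recomputed from fresh sensor data at any point during a task without recovering exact geometry.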
The level of complexity in the qualitative PDO/EPB representation is therefore appropriate to the kind of reasoning problems that arise when a human must instruct a robot at the task level - that is, in human-like terms. The intuitive nature of the PDO/EPB representation is therefore as significant for robot applications as the advantages of graceful degradation, and of operation with incomplete data during reasoning.