SENSORY RESILIENCE BASED ON SYNESTHESIA

Abstract

Situated cognition depends on accessing environmental state through sensors. Engineering and cost constraints usually lead to limited "pathways" where, for example, a vision sub-system includes only a camera and the software to process its output. This traditional and rational design style means that any hardware defect on the pathway causes the system to grind to a halt until repair. We propose a "sensoriplexer", a neural component architecture that addresses this issue in the common scenario where multiple sensors are available. This component architecture learns to mix and relate pathways, so that an agent facing failure in one sensory sub-system can degrade gracefully and coherently by relying on its other sub-systems. The architecture is inspired by the concept of synesthesia, and relies on statistical coupling between sensor signals. We show the benefits and limitations of the architecture on a simple shape-recognition scenario and a more complex emotion-recognition scenario.

1. INTRODUCTION

Situated agents embody various degrees of "SPA loops", in which they Sense, Process, and Act repeatedly in their environment. The simplest agent senses only one dimension of the environment: a sensing failure prevents it from processing and acting, effectively halting it. More complex agents with multiple sensors can sometimes continue operating despite the loss of some sensors, but they usually can no longer perform the functions that depend on the lost sensors. Sensor redundancyfoot_0 is a common solution for ensuring continued operation. Flagship projects such as space rovers introduce redundant sensors, for example by doubling all hazard-avoidance cameras ("hazcams") on NASA's Curiosity or CNSA's Yutu. Yet engineering and cost constraints rule out this option in many systems, so the loss of most non-critical sensors means the agent enters a "degraded" mode of operation.

Biological agents can compensate to some extent for the loss of a sensor by using the other senses they are endowed with. A blind person can rely on touch to "read" visual cues encoded in Braille, or "hear" speech that is signed visually. Impressive technology allows people to learn to feel sounds on the skin Bach-y Rita (1972), and by extension allows deaf people to "listen" to a conversation by translating sound into tactile patterns on their skin Novich & Eagleman (2015). This compensation capability appears to rely on brain mechanisms that relate different sensory inputs of the same object. These mechanisms also appear to be common to everyone, with different degrees of expression Cytowic (2018). For example, we perceive an apple (mainly) via vision, touch, and smell, and we can relate the sight of an apple to its likely smell or feel. Although the underlying brain mechanisms are not yet fully identified, the effect has been named synesthesia, or sometimes ideaesthesia Jürgens & Nikolić (2012); Nikolić (2014).

This article proposes the sensoriplexer component architecture to model synesthesia-inspired mechanisms in artificial agents.
The sensoriplexer (SP) allows an agent to learn relations between its "senses" and to exploit them in its downstream activities. After reviewing work related to resilience and its limits, we present a formal model of the SP and a corresponding implementationfoot_1. We use this implementation to conduct a series of experiments that demonstrate and evaluate the capabilities of systems including an SP. The article ends with a discussion of the results and of future work.
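To make the underlying intuition concrete, the following is a minimal sketch, in Python with NumPy, of statistical coupling between two sensor streams. It is not the sensoriplexer architecture itself (which is presented formally later); it only illustrates the principle that a mapping learned from paired observations of two senses lets an agent estimate a failed sensor's signal from a healthy one. The sensor dimensions and the linear-least-squares model are assumptions made for the sake of the example.

```python
import numpy as np

# Illustrative sketch only (not the paper's architecture): learn a
# statistical coupling between two sensor streams from paired
# observations, then use it to estimate a failed sensor's signal.

rng = np.random.default_rng(0)

# Paired readings from two hypothetical sensors observing the same
# scene: sensor A (e.g. 4-dim vision features) and sensor B
# (e.g. 3-dim audio features), related by an unknown mapping.
A = rng.normal(size=(500, 4))
true_map = rng.normal(size=(4, 3))
B = A @ true_map + 0.01 * rng.normal(size=(500, 3))

# Learn the coupling by least squares: find W such that B ≈ A @ W.
W, *_ = np.linalg.lstsq(A, B, rcond=None)

# At run time, if sensor B fails, approximate its signal from A,
# letting downstream components that consume B degrade gracefully.
a_new = rng.normal(size=(1, 4))
b_estimate = a_new @ W
```

A richer, learned non-linear coupling plays the analogous role in the architecture described in the following sections; the linear version above merely shows why paired sensory data suffices to recover one pathway from another.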



foot_0: Informally, redundancy here means setting multiple sensors on the same environmental object. Two close-by front cameras on a robot are redundant, but a front and rear camera pair is not (they are complementary).
foot_1: Available in the supplementary materials, and on GitLab (private for now).

