BORT: TOWARDS EXPLAINABLE NEURAL NETWORKS WITH BOUNDED ORTHOGONAL CONSTRAINT

Abstract

Deep learning has revolutionized human society, yet the black-box nature of deep neural networks hinders further application in reliability-demanding industries. In attempts to unpack them, many works observe or intervene on internal variables to improve the comprehensibility and invertibility of black-box models. However, existing methods rely on intuitive assumptions and lack mathematical guarantees. To bridge this gap, we introduce Bort, an optimizer for improving model explainability with Boundedness and orthogonality constraints on model parameters, derived from sufficient conditions for model comprehensibility and invertibility. We perform reconstruction and backtracking on model representations optimized by Bort and observe a clear improvement in model explainability. Based on Bort, we are able to synthesize explainable adversarial samples without additional parameters or training. Surprisingly, we find that Bort consistently improves the classification accuracy of various architectures, including ResNet and DeiT, on MNIST, CIFAR-10, and ImageNet.

1. INTRODUCTION

The success of deep neural networks (DNNs) has advanced almost every artificial intelligence application. However, the black-box nature of DNNs hinders humans from understanding how they complete complex analyses. Explainable models are especially desired in reliability-demanding industries such as autonomous driving and quantitative finance. Complicated as DNNs are, they work as mapping functions connecting the input data space and the latent variable spaces (Lu et al., 2017; Zhou, 2020). Therefore, we consider explainability in both mapping directions. (Forward) Comprehensibility: the ability to generate an intuitive understanding of how each module transforms the inputs into the latent variables. (Backward) Invertibility: the ability to invert the latent variables back to the original space. We deem a model explainable if it possesses comprehensibility and invertibility simultaneously. We provide formal descriptions of the two properties in Section 3.1.
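Bort's exact update rule is not given in this excerpt, but the constraints it names suggest a projected-gradient scheme. The sketch below is an illustrative assumption, not the paper's algorithm: after an ordinary SGD step, the weight matrix is projected back onto (scaled) orthogonal matrices via the polar decomposition, which simultaneously enforces orthogonality (supporting invertibility) and bounds the spectral norm (boundedness). The function names `project_bounded_orthogonal` and `bort_like_step` are hypothetical.

```python
import numpy as np

def project_bounded_orthogonal(W, bound=1.0):
    # The nearest orthogonal matrix to W in Frobenius norm is U @ Vt,
    # where W = U @ diag(s) @ Vt is the SVD (polar decomposition).
    # Scaling by `bound` fixes the spectral norm at `bound`.
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return bound * (U @ Vt)

def bort_like_step(W, grad, lr=0.1, bound=1.0):
    # Hypothetical projected-gradient step: a plain SGD update followed
    # by projection back onto the bounded-orthogonal constraint set.
    return project_bounded_orthogonal(W - lr * grad, bound=bound)

rng = np.random.default_rng(0)
W = bort_like_step(rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
# After the step, W.T @ W is (numerically) the identity, so the layer
# preserves norms and is trivially invertible on its column space.
```

A practical implementation would apply cheaper approximate projections (e.g. a soft orthogonality penalty) rather than a full SVD per step; the sketch only illustrates the constraint set itself.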



Figure 1: Bort improves explainability and performance simultaneously. (a) Examples of reconstruction and saliency analysis. (b) Top-1 accuracy with various networks and optimizers on ImageNet.

Existing literature on explainability can be mainly categorized into black-box and white-box approaches, based on whether internal variables are involved. Black-box explanations focus on the external behavior of the original complex model without considering the latent states (Zhou et al., 2016;

Code availability: https://github
