BORT: TOWARDS EXPLAINABLE NEURAL NETWORKS WITH BOUNDED ORTHOGONAL CONSTRAINT

Abstract

Deep learning has revolutionized human society, yet the black-box nature of deep neural networks hinders their further application in reliability-demanding industries. In attempts to unpack them, many works observe or intervene on internal variables to improve the comprehensibility and invertibility of black-box models. However, existing methods rely on intuitive assumptions and lack mathematical guarantees. To bridge this gap, we introduce Bort, an optimizer for improving model explainability with Boundedness and orthogonality constraints on model parameters, derived from the sufficient conditions of model comprehensibility and invertibility. We perform reconstruction and backtracking on the model representations optimized by Bort and observe a clear improvement in model explainability. Based on Bort, we are able to synthesize explainable adversarial samples without additional parameters and training. Surprisingly, we find that Bort consistently improves the classification accuracy of various architectures, including ResNet and DeiT, on MNIST, CIFAR-10, and ImageNet.

1. INTRODUCTION

The success of deep neural networks (DNNs) has promoted almost every artificial intelligence application. However, the black-box nature of DNNs hinders humans from understanding how they complete complex analyses. Explainable models are especially desired in reliability-demanding industries such as autonomous driving and quantitative finance. Complicated as DNNs are, they work as mapping functions that connect the input data space and the latent variable spaces (Lu et al., 2017; Zhou, 2020). Therefore, we consider explainability in both mapping directions. (Forward) Comprehensibility: the ability to generate an intuitive understanding of how each module transforms the inputs into the latent variables. (Backward) Invertibility: the ability to invert the latent variables back to the original space. We deem a model explainable if it possesses comprehensibility and invertibility simultaneously. We provide formal descriptions of the two properties in Section 3.1.

Existing literature on explainability can be mainly categorized into black-box and white-box approaches, based on whether internal variables are involved. Black-box explanations focus on the external behavior of the original complex model without considering the latent states (Zhou et al., 2016). White-box approaches instead constrain internal variables (Shen et al., 2021; Liang et al., 2020) to improve model explainability, for example by forcing each filter to represent a specific data pattern. Transformation invariance constraints (Wang & Wang, 2021) later emerged to improve the robustness of explanations. However, these methods usually suffer from a tradeoff between performance and explainability and cannot be generalized across architectures. For the backward direction, reconstruction-based methods usually employ a linear combination of kernels layer by layer for feature reconstruction; however, they ignore the potential entanglement between kernels and thus lead to suboptimal reconstruction.

We find that almost all explainability literature rests on specific assumptions, which may be objectively incorrect or have no causal connection to the actual mechanism of the model. To bridge this gap, we give formal definitions of comprehensibility and invertibility and derive their sufficient conditions: boundedness and orthogonality, respectively. We further introduce an optimizer with a Bounded orthogonal constraint, Bort, as an effective and efficient instantiation of our method. Extensive experiments demonstrate the effectiveness of Bort for both model explainability and performance, as shown in Figure 1.

Figure 1: Bort improves explainability and performance simultaneously. (a) Examples of reconstruction and saliency analysis. (b) Top-1 accuracy with various networks and optimizers on ImageNet.

We highlight our contributions as follows:

• Mathematical interpretation of explainability. We derive boundedness and orthogonality as sufficient conditions of explainability for neural networks.

• A plug-and-play optimizer, Bort, to improve explainability. Bort can be generally applied to any feedforward neural network, including MLPs, CNNs, and ViTs.

• Clear improvement of model explainability. In addition to better reconstruction and backtracking results, we can synthesize explainable adversarial examples without additional training.

• Consistent improvement of classification accuracy. Bort improves the performance of various deep models, including CNNs and ViTs, on MNIST, CIFAR-10, and ImageNet.
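The two sufficient conditions can be made concrete with a small numerical sketch. The snippet below is our own minimal NumPy illustration, not the paper's actual Bort update rule: `ortho_penalty` and `bound_penalty` are hypothetical names for (i) a soft orthogonality constraint and (ii) a bound on the weight matrix's spectral norm, and the final lines show why orthogonality yields invertibility: a matrix with orthonormal rows lets the transpose recover the input exactly from the latent code.

```python
import numpy as np

def ortho_penalty(W):
    """Soft orthogonality constraint: || W W^T - I ||_F^2.
    Zero if and only if the rows of W are orthonormal."""
    k = W.shape[0]
    gram = W @ W.T
    return float(np.sum((gram - np.eye(k)) ** 2))

def bound_penalty(W, s_max=1.0):
    """Boundedness constraint: penalize spectral norm above s_max."""
    s = np.linalg.norm(W, 2)  # largest singular value of W
    return float(max(0.0, s - s_max) ** 2)

# A rotation matrix has orthonormal rows and spectral norm 1,
# so both penalties vanish (up to floating-point error).
theta = 0.3
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert ortho_penalty(W) < 1e-12
assert bound_penalty(W) < 1e-12

# Invertibility: the latent code z = W x is inverted by the transpose,
# since W^T W = I for orthonormal rows.
x = np.array([1.5, -0.7])
z = W @ x
x_rec = W.T @ z
assert np.allclose(x, x_rec)
```

In a real training loop these penalties would be added to the task loss (or folded into the optimizer's update, as Bort does); the sketch only demonstrates the geometry that the constraints enforce.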

Code availability: https://github

