NEURAL RANDOM PROJECTION: FROM THE INITIAL TASK TO THE INPUT SIMILARITY PROBLEM

Anonymous

Abstract

Data representation plays an important role in evaluating the similarity between objects. In this paper, we propose a novel approach to implicit data representation that evaluates the similarity of input data using a trained neural network. In contrast to the previous approach, which uses gradients for representation, we utilize only the outputs of the last hidden layer of a neural network and do not use a backward step. The proposed technique explicitly takes the initial task into account and significantly reduces both the size of the vector representation and the computation time. In general, a neural network obtains representations related only to the problem being solved, which makes the last-hidden-layer representation useless for the input similarity task. In this paper, we consider two reasons for the decline in representation quality: correlation between neurons and an insufficient size of the last hidden layer. To reduce the correlation between neurons, we use orthogonal weight initialization for each layer and modify the loss function to keep the weights orthogonal during training. Moreover, we show that activation functions can potentially increase correlation. To solve this problem, we apply a modified Batch Normalization with Dropout. Using orthogonal weight matrices allows us to consider such neural networks as an application of the Random Projection method and to obtain a lower-bound estimate for the size of the last hidden layer. We perform experiments on the MNIST dataset and a physical examination dataset. In both experiments, we first split the set of labels into two disjoint subsets, train a neural network on the resulting binary classification problem, and then use this model to measure the similarity between inputs and define hidden classes. We also cluster the inputs to evaluate how well objects from the same hidden class are grouped together.
Our experimental results show that the proposed approach achieves competitive results on the input similarity task while reducing both computation time and the size of the input representation.

1. INTRODUCTION

Evaluating object similarity is an important area of the machine learning literature. It is used in various applications such as search query matching, image similarity search, recommender systems, clustering, and classification. In practice, the quality of similarity evaluation methods depends on the data representation. Neural networks have long shown successful results in many tasks, one of which is obtaining good representations. Many of these methods can be characterized in terms of domain and task. In the first case, we have only an unlabeled dataset. We can then use autoencoders (Bank et al., 2020) or self-supervised approaches (Chen et al., 2020; Devlin et al., 2018; Doersch et al., 2015; Dosovitskiy et al., 2014; Gidaris et al., 2018; Noroozi & Favaro, 2016; Noroozi et al., 2017; Oord et al., 2018; Peters et al., 2018), which require the formulation of a pretext task that in most cases depends on the data type. These methods can be called explicit because they directly solve the problem of representation learning. Moreover, these models can be used for knowledge transfer when we have labeled data only in the target domain. In the second case, we have labeled data in both the source and target domains. We can then apply a multi-task learning approach (Ruder, 2017) or fine-tune models (Simonyan & Zisserman, 2014; He et al., 2016) trained on a large dataset such as ImageNet. Finally, there is the domain adaptation approach (Wang & Deng, 2018), where we have a single task but different source and target domains, with labeled data in the target domain (Hu et al., 2015) or with unlabeled data (Li et al., 2016; Yan et al., 2017; Zellinger et al., 2017). In our study, the target task is to measure the similarity between objects and to define hidden classes based on it. We are interested in the issue of implicit learning of representations: can a neural network store information about subcategories if we do not explicitly train it to do so?
More formally, we have the same source and target domains but different tasks, and we have no labeled data in the target domain. This distinguishes our setting from transfer learning. A solution to this problem could be useful in many practical cases. For example, we could train a model to classify whether messages are spam or not, and then group spam campaigns or kinds of attacks (phishing, spoofing, etc.) based on the similarity measured by the trained neural network. Similar cases arise in medicine (classifying patients as healthy or sick and grouping them by disease) or in finance (credit scoring). The benefits are that we do not depend on the data type and, more importantly, we use only one model for different tasks without fine-tuning, which significantly reduces the time spent developing and supporting several models. A similar study was carried out in (Hanawa et al., 2020), where the authors proposed evaluation criteria for instance-based explanations of decisions made by a neural network and tested several metrics for measuring input similarity. In particular, they proposed the identical subclass test, which checks whether two objects considered similar belong to the same subclass. According to their experiments, the best-performing approach is the one presented in (Charpiat et al., 2019), which measures similarity between objects using the gradients of a neural network; the authors applied their approach to the analysis of the self-denoising phenomenon. Although this method has theoretical guarantees and does not require modifying the model, in practice, especially in real-time tasks, using gradients tends to increase both the computation time and the size of the vector representation. This approach is described in more detail in Section 2.
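To illustrate why gradient-based representations are costly, the sketch below computes a gradient-based similarity in the spirit of (Charpiat et al., 2019): each input is represented by the gradient of the network output with respect to all parameters, and similarity is the cosine between these gradient vectors. This is a minimal toy sketch, not the authors' implementation; the tiny network, its sizes, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny one-hidden-layer network f_theta(x) = v . tanh(W x); theta = (W, v).
# Sizes are illustrative only.
d, h = 6, 4
W = rng.standard_normal((h, d))
v = rng.standard_normal(h)

def grad_theta(x):
    """Gradient of the scalar output w.r.t. all parameters, flattened into one vector."""
    z = np.tanh(W @ x)
    dv = z                               # df/dv_i = tanh((Wx)_i)
    dW = np.outer(v * (1.0 - z * z), x)  # df/dW_ij = v_i (1 - tanh^2((Wx)_i)) x_j
    return np.concatenate([dW.ravel(), dv])

def grad_sim(x1, x2):
    """Cosine similarity between the parameter gradients of two inputs."""
    g1, g2 = grad_theta(x1), grad_theta(x2)
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

x = rng.standard_normal(d)
y = rng.standard_normal(d)
s_self = grad_sim(x, x)  # identical inputs have identical gradients
```

Note that the representation here has length h*d + h = 28 for this toy model; for a realistic network it has one entry per parameter, i.e. millions of entries plus a backward pass per input, which is precisely the overhead our approach avoids.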
To avoid these problems, we propose a method that uses only the outputs of the last hidden layer of a neural network and does not use a backward step to vectorize the input. In our research, we found that correlation between neurons and an insufficient width of the last hidden layer degrade the quality of representations obtained in this implicit way. To address these issues, we propose several modifications. First, we show that the weight matrices should be orthogonal. Second, we modify Batch Normalization (Ioffe & Szegedy, 2015) to obtain the necessary mathematical properties and use it with dropout (Srivastava et al., 2014) to reduce the correlation caused by nonlinear activation functions. Using orthogonal weight matrices allows us to consider the neural network in terms of the Random Projection method and to estimate a lower bound on the width of the last hidden layer. Our approach is discussed in detail in Section 3. Finally, in Section 4 we report experiments on the MNIST dataset and a physical examination dataset (Maxwell et al., 2017). We use these datasets to show that our approach can be applied to any type of data and combined with different neural network architectures. In both experiments, we split the set of labels into two disjoint subsets, train a neural network on the resulting binary classification problem, and then use this model to measure the similarity between inputs and define hidden classes. We also cluster the inputs to evaluate how well objects from the same class are grouped together. Our experimental results show that the proposed approach achieves competitive results on the input similarity task while reducing both the computation time and the size of the input representation.
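The core of the proposed pipeline can be sketched as follows: initialize the layers with orthogonal weight matrices, run only a forward pass, and take the last hidden layer's output as the input representation, comparing inputs by cosine similarity. This is a simplified sketch under stated assumptions (a two-layer toy network, QR-based orthogonal initialization, no Batch Normalization or orthogonality-preserving loss term); all names and dimensions are illustrative, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonal(n, rng):
    """Random n x n orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))  # sign fix: scale column j by sign(r_jj)

# Toy two-layer network with orthogonal weights; widths are illustrative.
W1 = orthogonal(8, rng)
W2 = orthogonal(8, rng)

def embed(x):
    """Forward pass only; the last hidden layer's output is the representation."""
    hidden = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return hidden @ W2                # last hidden layer, used as the embedding

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = rng.standard_normal(8)
x_near = x + 0.05 * rng.standard_normal(8)  # small perturbation of x
x_far = rng.standard_normal(8)              # an unrelated input

s_near = cos_sim(embed(x), embed(x_near))
s_far = cos_sim(embed(x), embed(x_far))
```

Because each orthogonal layer preserves norms and inner products of its inputs, nearby inputs stay nearby in the embedding; this is what makes the Random Projection view applicable.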

2. RELATED WORK

Using a trained neural network to measure the similarity of its inputs is a new research topic. In (Charpiat et al., 2019), the authors introduce a notion of object similarity from the neural network perspective. The main idea is as follows: how much would a parameter variation that changed the output for x impact the output for x′? In principle, if the objects x and x′ are similar, then changing the parameters should affect both outputs in a similar way. Below is a formal description for the one-dimensional and multi-dimensional cases of the network output.

One-dimensional case. Let f_θ(x) ∈ ℝ be a parametric function, in particular a neural network, let x, x′ be input objects, and let θ ∈ ℝ^{n_θ} be the model parameters, where n_θ is the number of parameters. The authors proposed the following metric:

