CONSTRUCTIVE TT-REPRESENTATION OF THE TENSORS GIVEN AS INDEX INTERACTION FUNCTIONS WITH APPLICATIONS

Abstract

This paper presents a method to build explicit tensor-train (TT) representations. We show that a wide class of tensors can be explicitly represented with sparse TT-cores, obtaining optimal TT-ranks in many cases. Numerical experiments show that our method outperforms existing approaches in several practical applications, including game theory problems. Theoretical estimates of the number of operations show that for some problems, such as permanent calculation, our methods are close to the known optimal asymptotics, which were obtained by entirely different methods.

1. INTRODUCTION

The tensor train is a powerful tool for compressing multidimensional tensors (by a tensor we mean a multidimensional array of complex numbers). It allows us to circumvent the curse of dimensionality in a number of cases. For a d-dimensional tensor with n values for each index, direct storage requires O(n^d) memory cells, while the tensor train needs only O(ndr^2), where r is the average rank of the TT decomposition (Oseledets, 2011). In many important applications, the average rank is small enough that n^d ≫ ndr^2. Tensor approximation is a hot topic in machine learning. For example, in the paper (Richter et al., 2021) the tensor-train format is used to solve high-dimensional parabolic PDEs, with dimension up to d ~ 10^2 in the numerical experiments. Problems of building tensor decompositions and tensor completion are considered in (Lacroix et al., 2020; Fan, 2022; Ma & Solomonik, 2021). The properties of tensor decompositions as applied to machine learning tasks are discussed in (Ghalamkari & Sugiyama, 2021; Kileel et al., 2021; Khavari & Rabusseau, 2021). Existing methods allow one to build TT-decompositions by treating the tensor values as a black box. The TT-cross approximation method (Oseledets & Tyrtyshnikov, 2010) adaptively queries the points where the tensor value is evaluated. Iterative schemes such as the alternating least squares method (Oseledets & Dolgov, 2012) or alternating linear schemes (Holtz et al., 2012) build a decomposition by consistently updating the decomposition cores. These methods do not take into account the analytic dependence, if any, of the tensor value on its indices. At the same time, even for relatively simple tensors, these methods can take a long time to build a TT decomposition and, in the vast majority of cases, return an answer with a nonzero error, even if the original tensor has an exact TT decomposition.
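To make the storage comparison above concrete, the following is a minimal sketch (not code from the paper) of the TT format for a classic example with exact TT-rank 2: the tensor T[i_1, ..., i_d] = i_1 + ... + i_d. The core construction and the entry-evaluation helper are illustrative assumptions; the point is that an entry is a product of small matrices, and the cores occupy O(ndr^2) memory versus O(n^d) for the full tensor.

```python
import numpy as np

d, n = 6, 4  # dimension and mode size for the illustration

def tt_cores_sum(d, n):
    """TT-cores of T[i1,...,id] = i1 + ... + id, with TT-rank 2.

    Each core has shape (r_{k-1}, n, r_k), boundary ranks r_0 = r_d = 1.
    """
    cores = []
    first = np.zeros((1, n, 2))
    first[0, :, 0] = np.arange(n)  # carries the running sum
    first[0, :, 1] = 1.0           # carries the constant 1
    cores.append(first)
    for _ in range(d - 2):
        mid = np.zeros((2, n, 2))
        mid[0, :, 0] = 1.0         # pass the sum through
        mid[1, :, 0] = np.arange(n)  # add the current index
        mid[1, :, 1] = 1.0
        cores.append(mid)
    last = np.zeros((2, n, 1))
    last[0, :, 0] = 1.0
    last[1, :, 0] = np.arange(n)
    cores.append(last)
    return cores

def tt_entry(cores, idx):
    """Evaluate one tensor entry as a product of r x r core slices."""
    v = np.ones((1, 1))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]
    return v[0, 0]

cores = tt_cores_sum(d, n)
idx = (1, 3, 0, 2, 2, 1)
assert abs(tt_entry(cores, idx) - sum(idx)) < 1e-12

full_storage = n ** d                      # 4**6 = 4096 entries
tt_storage = sum(c.size for c in cores)    # 80 entries, O(d * n * r^2)
```

Here the gap (4096 vs. 80 numbers) is already visible at d = 6 and grows exponentially with d, while the TT storage grows only linearly.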
In this paper, we present a fast method to directly construct the cores of the TT decomposition of a tensor for which the analytical¹ dependence of the tensor value on the values of its indices is known. Technically, our method works with functions, each of which depends on a tensor index and which are sequentially applied to the values of the previous functions. These functions we call derivative



¹ By analytical dependence we mean a known symbolic formula for the tensor value, not the definition of the term within complex analysis.

