ETH Zurich AI Researchers Introduce ‘tntorch’: a PyTorch-Powered Tensor Learning Python Library That Supports Multiple Decompositions Under a Unified Interface

Tensors are an effective way to handle and represent multidimensional data arrays, but their storage and computational costs grow rapidly with the number of dimensions. Tensor decompositions are therefore important in machine learning, for example to factorize the weights of neural networks. This research introduces tntorch, an open-source Python package for tensor learning that supports several decompositions through a single user interface. In contrast to state-of-the-art packages, tntorch emphasizes an easy-to-use, decomposition-independent interface inherited from PyTorch.

tntorch supports several decomposition models that are important in machine learning, including CANDECOMP/PARAFAC (CP), the Tucker decomposition, and the tensor train (TT). Fig. 1 illustrates examples of tensor networks that tntorch can assemble. Tensors in tntorch can also mix more than one format: for example, one can interleave CP and TT cores, connect Tucker factors to TT cores, or combine all three formats in other ways.

Figure 1: Examples of tensor networks that tntorch can assemble. Source: https://arxiv.org/pdf/2206.11128v1.pdf
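
To illustrate the unified interface, here is a minimal sketch based on the constructor keywords (ranks_tt, ranks_cp, ranks_tucker) shown in tntorch's documentation; exact argument handling may differ between library versions:

```python
import torch
import tntorch as tn

full = torch.randn(32, 32, 32, 32)                  # a dense 4D array

# Compress the same data into different formats via one constructor:
t_tt = tn.Tensor(full, ranks_tt=5)                  # tensor train (TT)
t_cp = tn.Tensor(full, ranks_cp=5)                  # CANDECOMP/PARAFAC (CP)
t_hy = tn.rand([32]*4, ranks_tt=5, ranks_tucker=4)  # random hybrid TT-Tucker tensor

# Every format shares the same PyTorch-like interface:
print(t_tt)                         # shape, ranks, and compression ratio
approx = t_tt.torch()               # decompress back into a torch.Tensor
print(torch.dist(full, approx))     # reconstruction error
```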

The basic decompositions supported by tntorch are CP, TT, and Tucker. CP is stored as a set of 2D factor matrices, TT as a sequence of 3D tensor-train cores, and Tucker as TT-like cores connected to factor matrices. Table 1 shows a feature comparison of tntorch and six related libraries. tntorch uses the cross-approximation technique to build a compressed TT tensor from a black-box function; for discrete problems, cross-approximation has also been adapted as a gradient-free global optimizer. The library implements both TT and CP matrix decompositions. The CP matrix is implemented as a separate class, CPMatrix, since matrices require custom operations that are not available for CP tensors.

Table 1: Feature comparison of tntorch and six related libraries. Source: https://arxiv.org/pdf/2206.11128v1.pdf
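
As a sketch of how cross-approximation might be invoked (the tn.cross keywords below follow tntorch's documented interface and are assumptions, not a verbatim excerpt from the paper), a black-box function over a large grid can be compressed without ever materializing the full tensor:

```python
import torch
import tntorch as tn

# A vectorized black-box function: receives one vector per dimension
# and returns the function values at those coordinates.
def f(x, y, z, w):
    return torch.sqrt(x**2 + y**2 + z**2 + w**2)

domain = [torch.linspace(0, 1, 64)] * 4   # a 64^4 grid, never stored densely
t = tn.cross(function=f, domain=domain, eps=1e-6)
print(t)                                  # a compressed TT approximation of f
```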

tntorch can be used to learn incomplete tensors, tensors with constraints, or models with additional loss terms. Tensors can be accessed using various methods, including basic indexing, fancy indexing, indexing with NumPy arrays, and inserting dummy dimensions. The library supports a wide range of operations: tensor-vector and tensor-matrix products, element-wise operations, dot products, convolution, concatenation, mode reordering, padding, orthogonalization, and rank truncation. The TT-SVD algorithm is used to decompose multiple tensors into TT cores simultaneously. Several TT matrix routines are also provided, including fast matrix inversion, linear algebra operations, and determinant algorithms for rank-1 TT matrices, which are equivalent to Kronecker products.
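
For example, tensor completion can be set up as a small optimization over the observed entries. The sketch below assumes tntorch's tn.optimize helper as shown in its tutorials; the mask-based squared-error loss is illustrative, not the library's only option:

```python
import torch
import tntorch as tn

X = torch.randn(16, 16, 16)              # ground-truth data (hypothetical)
mask = torch.rand(16, 16, 16) < 0.1      # observe roughly 10% of the entries

# A random low-rank TT tensor whose cores will be learned via autograd
t = tn.rand(X.shape, ranks_tt=4, requires_grad=True)

def loss(t):
    diff = (t.torch() - X) * mask        # penalize error on observed entries only
    return diff.pow(2).sum()

tn.optimize(t, loss)                     # gradient descent over the TT cores

print(t[0, :, 2])                        # PyTorch-style indexing on the result
```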

Four configurations were benchmarked: CPU vs. GPU, each with loop-based and vectorized batch processing. Except for the TT-SVD experiment, which uses N = 4, all experiments use randomly initialized tensors with TT rank R = 20, physical dimension sizes I = 15, ..., 45, and N = 8 dimensions. The authors used PyTorch 1.13.0a0+git87148f2 and NumPy 1.22.4 on an Intel(R) Core(TM) i7-7700K processor with 64 GB of RAM and an NVIDIA GeForce RTX 3090 GPU. The results show that the GPU performs better in both batch and non-batch modes. Moreover, tntorch scales better with tensor size than the baselines (or similarly, in the case of cross-approximation), making it a suitable choice for data-intensive ML applications.
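
The GPU path can be exercised with a short, hedged sketch: tntorch cores are plain torch.Tensor objects (exposed via the cores attribute), so moving them to a CUDA device follows standard PyTorch practice; the shape below mirrors the paper's setup only loosely.

```python
import torch
import tntorch as tn

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# A random TT tensor with rank R = 20, as in the paper's benchmarks
# (the paper uses N = 8 dimensions; 4 are used here for brevity).
t = tn.randn(45, 45, 45, 45, ranks_tt=20)

# Cores are ordinary torch.Tensors and can be moved to the GPU directly.
t.cores = [core.to(device) for core in t.cores]
print(t.cores[0].device)
```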

To conclude, this PyTorch-powered library integrates several expressive tensor formats under a single user interface and provides a variety of analysis tools and methods. It gives machine learning practitioners access to the power of low-rank tensor decompositions while preserving the familiar look and feel of PyTorch tensors. The library includes many standard features of modern machine learning frameworks, such as automatic differentiation, GPU and batch processing, and advanced indexing.

This article is a research summary written by Marktechpost staff based on the paper 'tntorch: Tensor Network Learning with PyTorch'. All credit for this research goes to the researchers on this project. Check out the paper and GitHub.
