35. High-Performance Tensor Contraction without BLAS
Authors: Devin A. Matthews (University of Texas at Austin)
Abstract: Tensor computations – in particular tensor contraction (TC) – are important kernels in many scientific computing applications (SCAs). Due to the fundamental similarity of TC to matrix multiplication (MM) and to the availability of optimized implementations such as the BLAS, tensor operations have traditionally been implemented in terms of BLAS operations, incurring both a performance and a storage overhead. Instead, we implement TC using the flexible BLIS framework, which allows for reshaping of the tensor to be fused with partitioning and packing operations, requiring no reshaping operations or additional workspace. This implementation, TBLIS, achieves performance approaching that of MM, and in some cases considerably higher than that of traditional TC. Our implementation also supports multithreading using an approach identical to that used for MM in BLIS, with similar performance characteristics. The complexity of managing tensor-to-matrix transformations is also handled automatically in our approach, greatly simplifying use in SCAs.
Two-page extended abstract: pdf
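The storage and performance overhead of the traditional BLAS-based approach mentioned in the abstract can be illustrated with a small sketch. Below, a contraction is "matricized" so it can be handed to a GEMM: the tensor must first be permuted into a layout where the reshape is contiguous, and that permutation is an explicit copy, i.e. the extra workspace that TBLIS avoids by fusing the transposition into BLIS's partitioning and packing. The shapes and the particular contraction are illustrative choices, not taken from the paper:

```python
import numpy as np

# Illustrative contraction: C[a,b,c] = sum_d A[a,d] * B[b,d,c].
a, b, c, d = 4, 5, 6, 7
rng = np.random.default_rng(0)
A = rng.random((a, d))
B = rng.random((b, d, c))

# Traditional BLAS-based route: permute B so the contracted index d is
# leading, then flatten the free indices so the contraction becomes a
# single matrix multiplication (GEMM). The permutation requires an
# explicit copy -- this is the workspace/performance overhead that a
# fused implementation like TBLIS eliminates.
B_perm = np.ascontiguousarray(B.transpose(1, 0, 2))  # shape (d, b, c), copied
B_mat = B_perm.reshape(d, b * c)                     # d x (b*c) matrix view
C_mat = A @ B_mat                                    # a x (b*c) GEMM
C = C_mat.reshape(a, b, c)

# Reference result computed directly, without manual matricization.
C_ref = np.einsum('ad,bdc->abc', A, B)
assert np.allclose(C, C_ref)
```

In TBLIS the permutation step above does not materialize as a separate copy: the index reordering is folded into the packing of tensor blocks into the GEMM microkernel's buffer format, which BLIS performs anyway, so no reshaped copy of the operands is ever stored.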